How flaky are the KERNAL RS-232 routines?

Started by gsteemso, March 18, 2008, 11:55 AM

gsteemso

I see in some of the older topics here that the Kernal routines do not work around hardware bugs that can cause data loss if there is an interrupt at just the wrong moment. At least, I think that was the gist of it; the exact details are irrelevant to my purposes.

Some background:

I have rigged a direct modem-to-modem connection from my Mac (with external 28.8 modem) to my C128 (with Model 1670 dangling off the user port). I am working on a simple BASIC program, maybe with machine language subroutines if I need them, to access D64 images on the Mac via Zmodem and a terminal emulator. (The idea is that I can Zmodem across a D64 image and copy it straight onto the 1571 as it arrives, with just a bit of overhead for the file transfer voodoo. It doesn't matter how clumsy it is -- once it works, I can find, download and copy across something better.)

My question is, am I going to have to roll my own serial routines, or do the errors not really affect a transfer as slow as 1200 baud? Any advice will be appreciated.

G.
The world's only gsteemso

BigDumbDinosaur

Quote from: gsteemso on March 18, 2008, 11:55 AM
I see in some of the older topics here that the Kernal routines do not work around hardware bugs that can cause data loss if there is an interrupt at just the wrong moment. At least, I think that was the gist of it; the exact details are irrelevant to my purposes.

There are a number of problems with the fake EIA-232 routines in the kernel.  However, the numero uno problema is an often-encountered hardware defect in the 6526 CIA, to which you alluded.  What happens is if the interrupt control register is read a few I/O clock cycles before a timer B interrupt is to occur, timer B will not set its ICR flag and, in the case of the fake EIA-232 routines, a serious receive error will occur.

Quote
My question is, am I going to have to roll my own serial routines, or do the errors not really affect a transfer as slow as 1200 baud? Any advice will be appreciated.

The CIA problem will bite you no matter the speed.  So, if you are going to do this with the CBM serial functions, be sure to compute and verify a CRC on the data.  1200 bps on the 128 in FAST mode is reliable, other than the CIA problem.  2400 is shaky at best, and the overhead from processing all those NMIs becomes pretty severe.  If speed and reliability are important, you should be doing this with real UART hardware at the C-128 end of the pipe.
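
If you go the CRC route, the 16-bit CCITT polynomial ($1021) -- which, if memory serves, is what XMODEM-CRC and Zmodem's 16-bit check both use -- is cheap to do bit-by-bit on the 8502.  Here's a minimal sketch; the zero-page locations crclo/crchi and the calling convention are just placeholders, put them wherever your program has room:

; CRC-16/CCITT (poly $1021), one byte at a time, MSB first.
; Call with the received byte in A; clear crclo/crchi before
; the first byte of each block.

crclo   = $fa           ; placeholder zero-page work bytes
crchi   = $fb

updcrc  eor crchi       ; fold the new byte into the high CRC byte
        sta crchi
        ldx #8          ; then clock the 16-bit register 8 times
crcbit  asl crclo
        rol crchi       ; carry = bit shifted out of the top
        bcc crcnxt
        lda crchi       ; a 1 fell out: xor in the polynomial
        eor #$10
        sta crchi
        lda crclo
        eor #$21
        sta crclo
crcnxt  dex
        bne crcbit
        rts

Run every received byte through updcrc and compare the result with the check bytes at the end of the block; any mismatch means you ask for a retransmit.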
x86?  We ain't got no x86.  We don't need no stinking x86!

gsteemso

Quote from: BigDumbDinosaur on March 19, 2008, 01:23 AM
There are a number of problems with the fake EIA-232 routines in the kernel.  However, the numero uno problema is an often-encountered hardware defect in the 6526 CIA, to which you alluded.  What happens is if the interrupt control register is read a few I/O clock cycles before a timer B interrupt is to occur, timer B will not set its ICR flag and, in the case of the fake EIA-232 routines, a serious receive error will occur.

It seems to me that the hardware defect should be avoidable by manually polling Timer B every time I access the ICR. Does anyone know if that works, and are there any other compelling reasons I should write my own routines? I'm not bothered about throughput -- all I have for interface hardware is a 1200 baud Model 1670 -- but I am concerned with accuracy. I definitely like the CRC idea, but if I can cut down on the number of retransmissions needed that's always good too.
The world's only gsteemso

BigDumbDinosaur

Quote from: gsteemso on March 20, 2008, 12:16 AM
Quote from: BigDumbDinosaur on March 19, 2008, 01:23 AM
There are a number of problems with the fake EIA-232 routines in the kernel.  However, the numero uno problema is an often-encountered hardware defect in the 6526 CIA, to which you alluded.  What happens is if the interrupt control register is read a few I/O clock cycles before a timer B interrupt is to occur, timer B will not set its ICR flag and, in the case of the fake EIA-232 routines, a serious receive error will occur.

It seems to me that the hardware defect should be avoidable by manually polling Timer B every time I access the ICR. Does anyone know if that works, and are there any other compelling reasons I should write my own routines? I'm not bothered about throughput -- all I have for interface hardware is a 1200 baud Model 1670 -- but I am concerned with accuracy. I definitely like the CRC idea, but if I can cut down on the number of retransmissions needed that's always good too.

See http://www.csbruce.com/~csbruce/cbm/transactor/v9/i3/p062.html for a discussion on dealing with the ICR bug.
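
What you're proposing -- sampling timer B around the ICR read -- is the right general shape: if the flag comes back clear but the counter has jumped back up toward its reload value, an underflow almost certainly got swallowed.  A rough sketch of that check (untested, illustration only, and not the article's code -- read the article for the real treatment):

; CIA #2 at $DD00 is assumed (the chip the RS-232 routines drive).
; Comparing only the high byte assumes the bit time is longer than
; 256 cycles, which holds at 1200 or 2400 bps.  Adjust the label
; and directive syntax for your assembler.

CIA2    = $dd00
TB_HI   = CIA2+$07      ; timer B counter, high byte
ICR     = CIA2+$0d      ; interrupt control register

chkicr  lda TB_HI       ; sample timer B just before the ICR read
        sta tbsave
        lda ICR         ; this read clears every flag in the ICR,
        sta icrsav      ;  so keep a copy for the handler
        and #%00000010  ; timer B underflow flag present?
        bne tbwork      ; yes: reported normally
        lda tbsave      ; no: did the counter jump back up?
        cmp TB_HI       ; it counts down, so before >= after
        bcs done        ;  means nothing was missed
        lda icrsav      ; counter reloaded: an underflow happened
        ora #%00000010  ;  but the flag was eaten -- patch the
        sta icrsav      ;  saved copy and fall through
tbwork  ; ... service the timer B (bit-timing) event here ...
done    rts

tbsave  .byte 0
icrsav  .byte 0

There's still a small window between the two reads, so don't treat it as bulletproof -- keep the CRC check on top of it regardless.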
x86?  We ain't got no x86.  We don't need no stinking x86!

airship

Just went over and read the article. Conclusion: The Transactor was and is AWESOME!!
Serving up content-free posts on the Interwebs since 1983.
History of INFO Magazine

BigDumbDinosaur

BTW, aside from the timer-B bug in the CIA there's also the TOD alarm bug.  As you know, it's possible to arrange for the TOD clock to generate an interrupt at a particular time of day.  It turns out that the alarm won't always go off if the tenths of seconds are exactly zero at alarm time.  The solution is to set the alarm to go off when the tenths aren't zero.
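
If you ever use the alarm, the dodge looks something like this -- CIA #1 at $DC00 shown, BCD values, 3:30:00 AM picked purely as an example, and generic assembler syntax; it's only an illustration of keeping the tenths digit non-zero:

CIA1    = $dc00
TOD10   = CIA1+$08      ; tenths of seconds
TODSEC  = CIA1+$09      ; seconds (BCD)
TODMIN  = CIA1+$0a      ; minutes (BCD)
TODHRS  = CIA1+$0b      ; hours (BCD, bit 7 = PM)
CRB     = CIA1+$0f      ; bit 7 = 1: TOD writes set the alarm
ICREG   = CIA1+$0d      ; interrupt control register

setalm  lda CRB
        ora #%10000000  ; route TOD writes to the alarm registers
        sta CRB
        lda #$03        ; 3 o'clock, AM (bit 7 clear)
        sta TODHRS      ; write hours first...
        lda #$30
        sta TODMIN
        lda #$00
        sta TODSEC
        lda #$01        ; ...and tenths last: 1, not 0, to dodge
        sta TOD10       ;  the alarm bug
        lda CRB
        and #%01111111  ; back to normal TOD clock writes
        sta CRB
        lda #%10000100  ; enable the TOD alarm interrupt (bit 2)
        sta ICREG
        rts

Your IRQ handler then has to check bit 2 of the ICR to see whether the alarm, rather than the usual timer A jiffy interrupt, is what fired.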

Just about every I/O chip made by CSG has had some sort of design defect.  The two worst were, without a doubt, the 6526 and the first revision of the 8563.
x86?  We ain't got no x86.  We don't need no stinking x86!