A rant in favor of C

  • Published 05 Jan 2018

This is a story I copied from some anonymous poster on 4chan’s technology board about how stuff used to be before we had all kinds of abstractions to help us use a computer, along with a shorter story about his grandfather, who worked at Kodak.

First off, this all started when another poster complained that C was a “fundamentally flawed language that has many errors”.

Okay, what y’all youngins need to understand is what programming looked like before C, and what it looked like with C. Look at pic related, look at this beautiful thing. This is a Data General Nova, first released in 1969, and my grandfather had his employers buy him one, because they were sick of him hogging the mainframe for weeks on end. I think at first they got him a PDP-8, but he got sick of that thing QUICK, he needed a REAL machine. He wasn’t gonna settle for a Nova 1200, nuh, he needed the fastest, so he got a Nova 800. 800ns cycle time, 300kHz CPU, 16k of memory (8k words), and by the heavens, a 1.4MB hard disk. Now, in 1972, what the hell are you gonna do with a 1.4MB hard disk? This wasn’t a mainframe, this was a personal minicomputer! Well, first he’d sit down and read through hundreds of pages of technical documents to understand the architecture, then he’d play around keying stuff into the front panel, then he’d hook up a punch card reader and start writing programs.

He would program everything in machine code, he wrote his own disk drivers, task scheduler, everything. He’d keep hand-drawn spreadsheets to track where he’d put stuff in memory, so he could remember for later if something needed to be accessed. HE WAS MANUALLY PUNCHING IN MEMORY LOCATIONS FROM A HAND-DRAWN CHART. What he did with it, well, he spent months running simulations to work out why NTSC was shifting colours when broadcast over about 800 km, tasked with coming up with a solution, coining the phrase “Never The Same Colour Twice”, and saying “Know what? That PAL thing looks nice, let’s just use that instead”.

He decided that maybe computers could be used to sharpen film pictures, but first he needed a digital copy of an image. So he hooked up a TV camera, had the “nerds” make an interface box for him so he could plug it into his Nova, then wrote all the software needed to read from an analog TV camera and write to disk, using a 300kHz processor and 16k of memory. One scan, producing an image with about a million points (1 megapixel), IN 1972, took about 12 hours to run, so he’d focus the camera on a picture, start the scan, go home, and when he got back to work the next day, it’d be done. This was all done with stuff he’d just found lying around his lab.

But that’s not the crazy part, he wasn’t trying to make a scanner or camera, he’d already seen NASA do that over in the US years before, he wanted to make an image sharpener. So, in pure machine code, by hand, with no memory management provided whatsoever, he started writing a program, by means of punching holes in cards, that would traverse the entire image, section by section, recognizing patterns, and modifying the image appropriately. To run this on a single colour channel of an image took a week, and you’d need to do all three channels separately. Surprise surprise, he mostly just did it all in grayscale. It worked, it worked quite well, and you damn well bet Kodak patented the heck out of it, but dedicating an $8,000 minicomputer ($47,000 adjusted), FOR THREE WEEKS, just to sharpen an image, in 1972 just wasn’t viable. Plus there was no way of putting the image back on film. It could be displayed on a persistent CRT, well, an old oscilloscope Gramps found (a brand-new oscilloscope Gramps pinched from the guys next door), but you’d have to zoom in on a section to see the quality benefit, and it was still all green.
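To make that concrete, here’s roughly what “traverse the image, section by section, and modify it appropriately” boils down to in modern C: a small sharpening kernel walked over a grayscale buffer. This is only a sketch for illustration, not his actual program; the kernel, the image size, and every name in it are my own assumptions.

```c
#include <stdio.h>
#include <stdlib.h>

#define W 1024
#define H 1024

/* Sharpen one grayscale channel with a 3x3 kernel: walk the image
 * section by section and rewrite each pixel from its neighbourhood. */
static void sharpen(const unsigned char *src, unsigned char *dst)
{
    /* classic sharpening kernel: 5 in the centre, -1 up/down/left/right */
    static const int k[3][3] = {
        {  0, -1,  0 },
        { -1,  5, -1 },
        {  0, -1,  0 },
    };

    for (int y = 1; y < H - 1; y++) {
        for (int x = 1; x < W - 1; x++) {
            int acc = 0;
            for (int dy = -1; dy <= 1; dy++)
                for (int dx = -1; dx <= 1; dx++)
                    acc += k[dy + 1][dx + 1] * src[(y + dy) * W + (x + dx)];
            if (acc < 0)   acc = 0;
            if (acc > 255) acc = 255;
            dst[y * W + x] = (unsigned char)acc;
        }
    }
}

int main(void)
{
    unsigned char *src = malloc(W * H);   /* the "scanned" channel */
    unsigned char *dst = calloc(W * H, 1);
    if (!src || !dst)
        return 1;

    /* fill with a dummy pattern; the real thing read a scan from disk */
    for (int i = 0; i < W * H; i++)
        src[i] = (unsigned char)(i % 251);

    sharpen(src, dst);
    printf("centre pixel: %d before, %d after\n",
           src[(H / 2) * W + W / 2], dst[(H / 2) * W + W / 2]);

    free(src);
    free(dst);
    return 0;
}
```

On today’s hardware that loop finishes before you notice; on a 300kHz Nova, in hand-punched machine code, it was a week per channel.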

Now, this is the sorta thing people were doing on these machines in the early ’70s. All memory was handled by manually hard-coding addresses, all for the specific machine they had in front of them. You’d be re-writing everything for a different variation of the same model of machine, just because of timing (unless you were just running calculations and the memory layout was the same, but good luck with that teletype if your baud rate’s now 23.5% faster. Where’s that variable that holds the timing? Oh wait, that doesn’t exist yet, the whole thing’s coded around a fixed rate).
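For contrast, the thing C would later make nearly free is just giving that timing a name, so it lives in exactly one place. A toy sketch, with made-up numbers, of the variable that “doesn’t exist yet” in the hard-coded version:

```c
#include <stdio.h>

/* One named constant instead of a timing value smeared through the code.
 * Change the baud rate here and every delay derived from it follows.
 * The numbers are illustrative only. */
#define BAUD_RATE     110u   /* bits per second on the teletype line */
#define BITS_PER_CHAR 11u    /* start + 8 data + 2 stop bits */

/* microseconds to wait per character at the configured rate */
static unsigned long char_delay_us(void)
{
    return (1000000UL * BITS_PER_CHAR) / BAUD_RATE;
}

int main(void)
{
    printf("per-character delay at %u baud: %lu us\n",
           BAUD_RATE, char_delay_us());
    return 0;
}
```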

EVERYTHING you did back then was like this, even (especially) disk access. If you were working on a mainframe and wanted to store some data, you’d manually type in the cylinder and sectors you wanted to read or write from, and every now and then, a few minutes after you wrote something, you’d hear someone else across the room yell at the top of their lungs because you just wrote over one of their databases. No filesystem, NOT EVEN FILES. There’s just a list of employees on the chalkboard and the ranges they’re allocated. If you were being real fancy, you’d write this as a document to the first few sectors of the disk, which would actually work if everyone’s on a timeshare system and can just load up a text viewer, but when you’ve already loaded a program, chalkboard it is. Unless you’re being cheeky and think you can remember your sectors off by heart, then just mash ‘em in dude, she’ll be roit.
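“Manually type in the cylinder and sectors” sounds exotic, but it’s just arithmetic over the drive geometry, done in your head or on the chalkboard. A sketch of that arithmetic in C, with an invented geometry, purely to show what people were keeping track of by hand:

```c
#include <stdio.h>

/* Hypothetical drive geometry; the numbers are illustrative only. */
#define HEADS             2u
#define SECTORS_PER_TRACK 12u

/* Classic cylinder/head/sector to linear block number conversion.
 * This is the number you'd scribble on the chalkboard so the next
 * person knows which blocks are yours. */
static unsigned chs_to_block(unsigned cyl, unsigned head, unsigned sec)
{
    return (cyl * HEADS + head) * SECTORS_PER_TRACK + (sec - 1);
}

int main(void)
{
    /* "my data starts at cylinder 40, head 1, sector 3" */
    printf("block %u\n", chs_to_block(40, 1, 3));
    return 0;
}
```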

So this is how everything worked and everyone got on with it pretty well, not many people were complaining.

But then a couple of guys over in the US decided they’d had enough of re-writing programs, copying punch card to punch card nearly verbatim save for the memory addresses, so they did a little thing called permanently changing what it meant to program and use a computer for the rest of history.

So these guys at Bell Labs decided “Know what? We should make a light abstraction layer over machine code, so rather than writing your programs for just one machine, rather than re-writing damn near an entire operating system for every single task you wanted to do, what if you could write a generic program in human-readable English ASCII, and the only thing you’d need to write per machine was a compiler, to take the ASCII instructions and turn them into machine code? What if, rather than keeping 20 or so punch cards you’d slot in halfway through your program if you needed disk access, and another 20 for teletype, and another 20 for every other feature, we could write an ASCII library of all these functions and you could bundle it into your program if you needed it? What if we could precompile them, so they could just sit ready on your machine for whenever you needed them, no need to even build them into every program? What if, rather than manually typing in memory addresses, you could store stuff in variables that the rest of the program can access BY ENGLISH NAME? Or, when you need to, store memory addresses in variables, which you could still access by name? What if you automated all this, so the program itself allocated, read, and wrote memory addresses to the variables, so no programmer would ever need to do it themselves?”

So, for each machine, all you’d need to make is a compiler, which you could write on another machine in ASCII and compile there, plus libraries for that specific machine, also written in ASCII, that you could either build straight into your program or precompile and store somewhere for whenever you needed them, and literally everything else is just portable code you could move from machine to machine.
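That whole wish list fits in a dozen lines of C. A trivial sketch (the names and numbers are mine, not anything from the story): a precompiled library pulled in by name, variables accessed BY ENGLISH NAME, and not one hard-coded address in sight.

```c
#include <stdio.h>    /* a precompiled library, pulled in by name */
#include <string.h>

int main(void)
{
    /* variables by English name; nobody keys in octal addresses */
    char operator_name[32];
    int  scans_completed = 3;

    strcpy(operator_name, "Gramps");

    /* the teletype and disk details live in the library, not in this program */
    printf("%s has finished %d scans\n", operator_name, scans_completed);
    return 0;
}
```

The same source compiles on any machine that has a compiler, which was the entire point.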

THIS is the level we’re working at, OP. For some new-age twerp to say it’s a “fundamentally flawed language and has many errors” is just mentally painful. By errors he means “it doesn’t do this thing we have space to run now; it not doing something that other things do is a FLAW and an ERROR”. We went from “manually type in the address of the thing you want, or maybe offset stuff from an address if you really want, either way you’re keeping a spreadsheet full of hex codes on your desk whether you like it or not” to “lol here it all is, named and everything, no need to know the address, you can simply copy the address of a variable to another variable and use the second to access the first, if something out of scope wants direct access to the first, just give it the second and it can do what the fuck it wants, go wild with math if you like, but don’t be a twat and TRY to keep track of where everything is”. Do you even begin to understand how much of a DREAM this shit was to programmers? It’s all these halfwit newbies who think the compiler should keep everything safe for them no matter what that look at C and go “Now, this language, so utterly pure and without clutter, it’s FUNDAMENTALLY FLAWED and HAS MANY ERRORS because IT’S NOT FULL OF SHIT THAT DOES EVERYTHING FOR ME, IT DOESN’T GET IN THE WAY, IT DOESN’T STOP ME FROM DOING WHAT I WANT, IT EXPECTS ME TO KNOW WHAT I’M DOING, IT’S SO GARBAGE BECAUSE I’M TOO FUCKING DUMB TO FOLLOW THE SIMPLEST RULEBOOK IN PROGRAMMING”.
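That “copy the address of a variable to another variable and use the second to access the first” business is just a pointer. A minimal sketch of exactly that, with names I made up:

```c
#include <stdio.h>

/* Something out of scope gets direct access to our variable
 * by being handed its address, nothing more. */
static void bump(int *counter)
{
    *counter += 1;       /* use the address to modify the original */
}

int main(void)
{
    int  frames = 0;
    int *p = &frames;    /* copy the address into a second variable */

    *p = 10;             /* access the first through the second */
    bump(p);             /* hand the address to something out of scope */

    printf("frames = %d (stored at %p)\n", frames, (void *)p);
    return 0;
}
```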

YES, YOU HAVE TO KEEP A VAGUE IDEA OF HOW YOUR MEMORY WORKS, BUT C DOES SO BLOODY MUCH FOR YOU THAT ANYTHING MORE IS LITERALLY “BABBY’S FIRST DRAG AND DROP HOCKEY GAME”. IT’S NOT EVEN MANAGING MEMORY, IT’S SITTING IN YOUR COMFY OFFICE TELLING THE PR GUY TO GET RID OF DEPARTMENTS YOU DON’T NEED ANYMORE, AND ADDRESSING EMPLOYEES BY THEIR BADGE NUMBER, NOT THEIR NAME, GENDER, WHAT OR WHO THEY ARE, JUST A SIMPLE NUMBER, AND PEOPLE STILL MANAGE TO SCREW IT ALL UP AND SAY “IT’S TOO HARD”.
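And the badge-number office is malloc and free: ask for a block, get back an address, hand it back when you’re done with it. A minimal sketch, again with names I made up:

```c
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* ask for room for one scanline; all we get back is an address,
     * the "badge number": no name, no ceremony */
    unsigned char *scanline = malloc(1024);
    if (scanline == NULL)
        return 1;        /* the request can be refused, so check it */

    scanline[0] = 0xFF;
    printf("scanline lives at %p\n", (void *)scanline);

    /* tell the PR guy the department isn't needed any more */
    free(scanline);
    return 0;
}
```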

Someone then replied with:

Damn, grandpappy sounds like a real big brain nigga.

To which he then said:

Kodak only kept the best. Surprising how much he got around to. Just as a side note, he developed some architecture software for them, basically CAD software, which they used to build a new factory, so he moved to New Zealand for a few years to oversee construction (this was 1960 to 1964 if my dates are correct). I don’t know whose fault it was, but the designer left out the space between the walls in the design, and they didn’t realize till well after construction had started; it was a big fun mess and many laughs were had.

“Well Fred, I guess there’s always gonna be problems when you play around with new tech, using computers for the sake of using computers might have been a bit silly and pointless here (keep in mind they literally invented CAD; they fixed the design and sent a reel-to-reel tape over to the US which they used to build multiple new factories, literally copying and pasting entire buildings, THE ENTIRE BUILDING PLAN AND ALL INFO, LITERALLY THE ONLY THING BEING SENT OVERSEAS, NO PAPERWORK, JUST A TAPE, AND THEY USED THAT TAPE TO MAKE A WHOLE BUILDING), I guess there’s gonna be the occasional couple-million-dollar mistake here and there lol, anyway building’s done, here, have a raise and go home”

His job was literally to do whatever he wanted, so that’s why he ended up playing around with image sharpening. Every now and then a new boss would show up and go “Hey, what’s this guy on our payroll without a job description doing, and why is he earning more than me?”. Some problem would crop up in the next month or so, he’d save the day, and the boss/new manager would make sure no one bothered him or upset him in the slightest. Still, I’m sure they were SOMEWHAT mad at times; once he was developing a new glue for their slides, because they could no longer import the toxic chemicals from the US they’d been using up till that point. Long story short, he wrapped a steel roller, 1 foot in diameter and 20 feet long, in miles of cardboard and his new glue, expecting it to come loose when he applied some heat, but it stuck like a rock and they had to chuck this massive, precision-engineered roller into the trash and make a new one. (Happy ending: eventually he DID get the glue working, and every slide in the Asia-Pacific region was using his special blend for decades after.)