$5 PC up and running

I could not pass up a compact Dell i3 (2nd-gen, dual-core) computer with 8 GB of RAM, minus the hard drive, for $5. I found a 250 GB SSD for $11 on Amazon.

As always, I put in a new backup (CMOS) battery when setting up a refurb. I had to remove the CPU heat sink to do this, but I always replace the heat-sink compound anyway. I noticed that the heat sink was scratched, as if it had already been removed. I then realized that the backup battery must have been replaced at some point, and whoever did it used a metallic tool to scrape off the old heat-sink compound.

It is quick without all the crap on it. They also had an i5 for $5, but it was larger. I am going to go back and see if it is still there, just to swap the processor. The Dell machines are great for upgrading processors. Still pondering what I am going to do with this one!


I do love a cheap workhorse PC. Nice find!


Up to my neck in old PCs.

All the people I know dump them off with me to ensure their disks are clean and I try to find homes for them…

If someone can use them, better still…



Well, my $5 PC is now a $21 PC. I purchased a 3.1 GHz quad-core i5 processor for $5 on eBay. I went back for the i5 machine, but of course it was gone.

It’s running on all 4 cylinders now! It is interesting how fast the price goes up for an i7. I am still not sure what I am going to do with this additional PC, so I really do not want to spend a lot of money on it if I do decide to donate it to someone who needs it.

I am leaning toward replacing the computer I use on my bench. It is an ASUS stick computer which takes up absolutely no space but is slow for a lot of the applications I use. It was purchased originally to use in my camper but I now use my phone to stream to the TV.


Did you try Ubuntu or one of the other Linux distributions…?


@jkwilborn that’s what I’m going to try and see if it works on it.


I remember when I was porting Linux to “Itanic” (Intel’s ill-conceived Itanium processor), 64-bit systems were esoteric systems that were going to be relevant only to the enterprise. The idea that relatively soon, an 8GB 64-bit 4-core machine would be old and small and barely able to run a modern OS would have blown my mind. :smiling_face:


Geez, you make me feel old… The first machine I built was based on an 8008 and had 1024 bytes of available memory… I could compile the source in assembler on the mainframe and generate different output formats, but had to hand-load it into the machine… Just used hex…

I was ecstatic when I could afford a punched paper tape reader… and another 1k of memory :face_with_spiral_eyes:

Even the standards had to be modified… in the digital world.

The standard prefix giga means 10^9, so 1 GHz is 10^9 Hz.

It was rather ambiguous up until the late '90s, because anything addressed with bits has to be related in base 2, or binary. For example, 1k isn’t 1000 in the digital realm; it’s 1024 with address lines.

This was corrected by the standards people, who assigned the binary prefix gibi, so 1 gibibyte is 2^30 bytes, which is 1024^3, both having a binary basis…
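For concreteness, the decimal and binary prefixes differ by about 7% at the giga scale. A quick shell sketch of the arithmetic (literal multiplications, since `**` is not portable shell):

```shell
#!/bin/sh
# Decimal (SI) vs binary (IEC) prefixes at the giga scale.
gb=$((1000 * 1000 * 1000))     # 1 GB  (gigabyte, SI)  = 10^9 bytes
gib=$((1024 * 1024 * 1024))    # 1 GiB (gibibyte, IEC) = 2^30 = 1024^3 bytes
echo "1 GB  = $gb bytes"
echo "1 GiB = $gib bytes"
echo "1 GiB is $((gib - gb)) bytes larger than 1 GB"
```

This is why a "250 GB" drive shows up as roughly 232 GiB in an OS that reports binary units.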

I felt fortunate to be able to bring home a Datel terminal with a modem from work and dial up the mainframe. The Datel was based on an IBM Selectric, over which Datel implemented computer control across a land line. The motor’s 60 Hz power dictated the communication speed of 134.5 baud. All from the comforts of home…

I used to run Adventure, a text game written in B, the predecessor of the C language… :exploding_head:

Sorry if I got off the topic here… just an interesting tidbit.



Honeywell and Motorola had a big presence here in Arizona in the '60s and '70s. I remember one of my friends had a terminal at home and we would dial into the mainframes with an acoustic coupler and play Adventure.


Intel became the processor to have because of its tight relationship with Microsoft, but it also fell on its face a few times because of that tight relationship and Microsoft’s failure to move the software forward. I remember getting a 150 MHz Pentium Pro for my OS/2 machine and it really screamed, but Windows 95 (or was it 98?) users saw no speed boost and sometimes slowdowns. IIRC the Pentium Pro was optimized for 32-bit code, and Windows was 16-bit under the hood. Windows NT would see a speed boost, but it probably had fewer business users than OS/2 had at the time.

And then the BP6 motherboard came out with 2 CPU sockets, and I got hold of the OS/2 server kernel, which was pretty cool, but OMG, Linux flew on dual CPUs too. I was working on a data acquisition system used on the Atlas V launchpad, which was running on SPARC, but I ported the software to my BP6 running Linux, and our little four-developer group used it to compile and test before moving code to SPARC for final testing. Before that gig I was doing OS/2 at SPAWAR, and OS/2 flew on the Pentium Pro vs. NT. I left when I was told the software was moving to NT, and that’s when I got the gig working on the Atlas V launch system component.

Basically, Intel built some nice hardware, but Windows sucked dirt and was crawling in the 16-bit world, and NT multitasking was pitiful on multi-processor systems. Again, OS/2 on 1 CPU outperformed NT on 2 CPUs in many tests that were allowed to be published, or that were published and then taken down.

The “Itanic” was quite the gamble, and while I never got to touch one to see how it did, it was never going to be something where Intel could leverage Microsoft’s Windows marketing to move many units. They might have been taking these leaps because it looked like the PowerPC was going to take the workstation markets via Taligent, but when you don’t have Microsoft on your side, their marketing will clobber anything not made by or for Microsoft.

Were you porting Linux to Itanic at Redhat or for the fun of it?

I hope @HalfNormal doesn’t mind the tangent we’ve gone off on here! :smiling_face:

When Itanic came out, Intel asked Red Hat to port it, and we quoted a price. They got a lower bid from another company, so they took the low bid. Intel eventually decided that they would come back and pay us after all. :sunglasses:

I don’t remember anyone at Red Hat being convinced that this was going to take the market by storm, but one of the benefits of maintaining an OS on multiple architectures is that it can make the code more robust on all architectures. We upstreamed lots of general bug fixes as part of the Itanic port (it exposed lots of “latent bugs”) and I think that work ended up a net positive for quality at Red Hat and in the larger open source world.

Because of the bootstrap work, and also because I did a lot of the porting work and bug fixes after that, Intel eventually asked me to come to Oregon to give a group of Intel employees a day of training on how to port to Itanic.

In general, bootstrapping Linux from cross-builds is how I got my start with Linux, shortly after Linus first announced the kernel (I ordered a new hard drive for Linux, I think the day after the announcement, and it took about a week to arrive, during which I was very impatient). I cross-compiled my bootstrap on a SPARC running SunOS.

My recollection is that how I installed my first ever Linux system, one of the first few dozen in the world to run Linux as far as I can tell, was:

  • FDISK on DOS to create partitions
  • boot+root disk gave me a kernel, a shell, and static mkfs and mknod
  • make the root filesystem
  • make device files, especially /dev/fd0
  • copy the shell to the filesystem
  • on SPARC, binary-edit the boot record to set the root device to the hard disk
  • on SPARC, build a tar binary, dd it to a floppy
  • on Linux, cat < /dev/fd0 > /bin/tar (the extra bytes at the end don’t hurt)
  • on SPARC, tar the tar binary to the floppy so the floppy is now a tar image of tar
  • on Linux, tar xf /dev/fd0 to replace the tar binary with one of the right length
  • keep cross-building binaries on SPARC and using tar to populate the Linux system with build tools (e.g. libc, gcc, make, etc.)
  • switch to a native build on the Linux system
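The clever step in the middle (a floppy that is a tar archive of tar itself) can be sketched with ordinary files standing in for /dev/fd0 and the binaries. Everything here is a hypothetical stand-in, not the original commands:

```shell
#!/bin/sh
# Simulate the floppy-bootstrap trick: a plain file plays the "floppy",
# a text file plays the cross-built tar binary (both are stand-ins).
set -e
work=$(mktemp -d)
cd "$work"

printf 'pretend-tar-binary' > tar.bin        # the cross-built tar binary

# dd the binary raw onto the "floppy" (padded out to the block size).
dd if=tar.bin of=floppy.img bs=512 conv=sync 2>/dev/null

# On the Linux side, "cat < /dev/fd0 > /bin/tar" is a raw device copy;
# the padding at the end of the image is the "extra bytes" that don't hurt.
cat < floppy.img > tar.padded

# Back on the build host, write a tar archive OF tar to the same floppy.
tar -cf floppy.img tar.bin

# Now tar can extract a correctly sized copy of itself from the floppy.
mkdir out
tar -xf floppy.img -C out
cmp tar.bin out/tar.bin && echo "extracted tar matches the original"
```

The point of the two-step dance is that a raw `cat` of the device gives a runnable-but-oversized binary, which is then used to un-tar an exact-length replacement of itself.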

At Red Hat, we bootstrapped Itanic at least twice. I did it the first time because I happened to be the one available, and I think Bill Nottingham did it the second time, which I vaguely recall was to make sure that we had a clean build and didn’t depend on any accidental artifacts of the initial bootstrap. I think each time took 1-2 days total effort; it really wasn’t that hard if you knew how to do it. We had plenty of other folks who could have done it too.

This is why I have recommended to lots of people who have wanted to learn how Linux is put together that they go through Linux From Scratch. Details have changed, but doing a bootstrap like that makes you really aware of the dependency graph, which is definitely a cyclic graph… :smiling_face:


@mcdanlj @dougl I do not mind at all. I enjoy the reminiscing and learning about history that I was always curious about.

@mcdanlj I remember when Red Hat was ready to go public; I told my brother-in-law, who was an investor at the time, to put all his money into it! I said, you will be rich! As for me, I was not the investing type, and I should have listened to my own advice!


It would have been quite the roller coaster ride for him though! A lot of people who bought near the early peak lost a lot when it slumped after that before it strengthened again.


Seems like that’s all we hear, probably because of its high occurrence :face_with_spiral_eyes:

So far I’ve been taken to the cleaners with my Tesla stock… :sob: Hoping the lithium mining stock will help out…

@HalfNormal, I was at Honeywell at I-17 and Thunderbird…



Heh, used to play this on my 5150, along with Castle and Rogue.


I had a lot of friends that worked there too. I even played on the Honeywell volleyball team!