
In yet another example of just how versatile some early computers were, a replica of an Apple 1 showed up at the Vintage Computer Festival West, run by the Computer History Museum, upgraded with a modern Wi-Fi module and running the latest version of ChatGPT. Better yet, it was put together by original Apple employee and Macintosh legend Daniel Kottke.
The Apple 1 was originally released in 1976 and featured a 1 MHz processor, just 4KB of memory, and 256 bytes of onboard ROM. Targeting the hobbyist market, it was relatively expensive for its day at $666.66, equivalent to around $3,700 in today's money. It was short-lived, though, replaced by the Apple II just a year later, and only around 200 were ever sold.
That's probably why even an Apple II engineer like Kottke built his latest project around a replica, but it's impressive nonetheless. The replica hardware uses just a handful of integrated circuits (as ChatGPT itself seems to appreciate in the video above), but it's hooked up to a modern Wi-Fi module, giving the ancient-looking hardware a link to the wider world of modern connected technology.
Using that link to connect the monochrome system to cutting-edge AI technology is a true blending of two worlds: the old and the new. No wonder Kottke drew a small crowd of intrigued enthusiasts, even at this kind of niche event.
It also highlights one of the real strengths of modern cloud-connected computing. AI like ChatGPT is notoriously power- and performance-hungry, demanding huge networks of expensive graphics processing hardware and fast storage to run at speed. But none of that needs to run locally, so even the slowest of systems can take advantage, as long as they can get online to send queries and receive text responses.
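To make that concrete, here's a minimal sketch of what a setup like this could look like. The article doesn't detail Kottke's actual build, so everything below is an assumption: a hypothetical Raspberry Pi-class bridge sits on the vintage machine's serial line, and the device path, baud rate, model name, and OPENAI_API_KEY environment variable are all placeholders.

```python
# Hypothetical sketch: a tiny serial-to-cloud bridge for a vintage terminal.
# Assumptions (not from the article): a modern bridge device sits on the
# Apple 1's serial line, and OPENAI_API_KEY is set in the environment.
import os

import requests
import serial  # pyserial

PORT = "/dev/ttyUSB0"   # placeholder device path
BAUD = 2400             # placeholder rate, slow enough for vintage hardware
API_URL = "https://api.openai.com/v1/chat/completions"


def ask_model(prompt: str) -> str:
    """Send one prompt to the hosted model; all heavy lifting happens remotely."""
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={
            "model": "gpt-4o-mini",  # placeholder model name
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]


def main() -> None:
    # The vintage side only ever sends and receives plain ASCII text.
    with serial.Serial(PORT, BAUD, timeout=1) as link:
        buffer = b""
        while True:
            byte = link.read(1)
            if not byte:
                continue
            if byte in b"\r\n":  # user pressed Return
                prompt = buffer.decode("ascii", errors="ignore").strip()
                buffer = b""
                if prompt:
                    reply = ask_model(prompt)
                    # Return the answer as plain ASCII the old hardware can print.
                    link.write(reply.encode("ascii", errors="ignore") + b"\r\n")
            else:
                buffer += byte


if __name__ == "__main__":
    main()
```

The point is how little happens on the vintage side: the Apple 1 only ever types and prints plain text, while all the heavy inference runs in a remote data center.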
In the case of the Apple 1, ChatGPT, which was trained on tens of millions of dollars' worth of 2025's most cutting-edge graphics hardware, could be driven from a system replicating hardware from almost 50 years earlier. A 1 MHz processor was enough to get Kottke a fast response to a query, despite the decades of development and hardware progression separating the two.
There's an argument to be made that AI on the edge, running entirely locally on individual devices, is much better for privacy and security. But when cloud responses arrive with this little latency, AI can effectively run on just about anything. And it probably will run on just about everything before too long, if the latest trends are anything to go by.
Thanks to Chris Skitch for the tip on this one.