I think that design could be made to run faster. The ALU currently replicates the original chips gate for gate rather than just describing the required operations behaviorally in Verilog. Newer FPGAs should be big enough to put the microcode in block RAM as well.
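For illustration, a behavioral description lets the synthesizer infer the arithmetic directly instead of instantiating each 74181 slice and its carry-lookahead chain. This is just a sketch of the idea, not the actual CADR sources (module and port names are made up):

```verilog
// Hypothetical behavioral ALU sketch -- not the CADR code.
// Rather than wiring up 4-bit 74181 slices structurally,
// describe the operations and let synthesis map them to fabric.
module alu32 (
    input  wire [31:0] a,
    input  wire [31:0] b,
    input  wire [2:0]  op,
    output reg  [31:0] y,
    output wire        zero
);
    always @* begin
        case (op)
            3'b000:  y = a + b;   // inferred adder uses FPGA carry chains
            3'b001:  y = a - b;
            3'b010:  y = a & b;
            3'b011:  y = a | b;
            3'b100:  y = a ^ b;
            default: y = a;       // pass-through
        endcase
    end
    assign zero = (y == 32'd0);
endmodule
```

The tools will map the adder onto the FPGA's dedicated carry logic automatically, which is usually both smaller and faster than a faithful gate-level copy of the discrete parts.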
Depends on the goals; personally I prefer an implementation as close to the original as possible, because I'm interested in how they did it and what performance they were able to achieve. If, on the other hand, the goal is to have a Lisp-machine-like working environment/experience translated to today's standards/performance, then I assume migrating the original code to an SBCL/Linux setup would rather be the way to go.
Likely, sure. You're relying on a ton of work by other people - thousands of them just to get a modern processor, let alone Linux.
Wouldn't it be cooler to understand the architecture, upgrade it and put it on an FPGA? Have a faithful Lisp machine with faster everything that fit in a $50 FPGA?
No Intel backdoors. No adtech. No telemetry. No X11 cruft. No systemd boot mess. No Nvidia driver that doesn't play well with others. You could own the whole architecture - it's small enough for a few people to really grok.
Like restarting all of computing - with a lisp machine, for fun. Not relying on the million years of effort in Linux and the million years of effort on the modern processor below it.
You'd be relying on the fundamental advancements at the silicon layer - modern ASIC cells are practically perfect compared to what was available in the 1970s. No need for multi-phase clocks or multi-rail power supplies (no ±10V rails). No 10A just to drive 128kB of SRAM. It simplifies everything!
Architecturally a lot of the design in something from this vintage is "because they had to". Modern FPGA design is almost like having 'perfect' or textbook components. You can fan-out hundreds and hundreds of nets and meet timing at 100MHz - something designers would have killed for in 1980!
With "proper" design, on a modern FPGA fabric, you could run at 500MHz. You'd have the world's most roided-out lisp machine.
Boot time? Practically instant. Key lag? What lag?
Need extra horsepower for a scientific calculation? Attach an accelerator directly to the bus.
You could use Yosys and Verilator and the whole chain would be open. Nobody could ever take it from the community.
On older silicon nodes, you can even build an ASIC. You could put the whole design on the SkyWater PDK and publish your transistor-level design. Would it be competitive with a 5nm processor? Absolutely not.
Would it be the ultimate expression of the Hacker rebellion? I think so.
> Wouldn't it be cooler to understand the architecture, upgrade it and put it on an FPGA?
Personally, I would refrain from "upgrading", and instead faithfully recreate the digital circuits (simply on an FPGA instead of discrete logic), as was apparently done in the referenced project. It's the same intention as when (re-)implementing Babbage's machines. If it's just about doing Lisp programming on a modern machine, everything is already there.
Another project could be to modify the CADR Verilog to match the LMI hardware, which has a slightly different MMU. The LMI software stack is complete and can rebuild itself.
I chose to emulate the OpenCores ethernet controller in the CADR software emulator to make it easier to move images between software and FPGA implementations.