Pathways to Cellular Supremacy in Biocomputing (nature.com)
52 points by hardmaru on Dec 31, 2019 | 24 comments


As someone who spent 10 years at the forefront of synthetic biology (and has programmed for 30+ years), I have always found biocomputing baffling. I would be willing to bet against Cellular Supremacy over any time frame, except evolutionary or geologic ones.


The only way I could see it having any practical application would be improved fine control of magnetic fields around metallocene or porphyrin cores, through engineered photo-reactive protein shells.


Useful evolutionary timeframes are days to months when the substrate reproduces in hours or minutes. Just a few generations can produce very significant adaptations via massive parallelism.


Your brain consumes about 40 watts and you wrote that sentence.


Why do you assume that writing that sentence was computationally hard?


The closest computational equivalent to writing that sentence that we have today involves executing a model with 1.5 billion parameters. (And I couldn't find an estimate of the energy cost of that.)


Just for reference, you should know that the more-or-less minimum feature size for a biological structure is about 100 nm: https://en.wikipedia.org/wiki/DNA#/media/File:DNA_nanostruct...

That's only about 100x smaller than contemporary transistor features, so realistically there is an upper limit to the energy benefit of biocomputing structures of about 10,000x, assuming energy typically scales with size squared.
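As a rough back-of-envelope sketch of that bound (the 100x size ratio and the energy-scales-with-size-squared assumption are taken from the comment above, not measured):

    # Back-of-envelope only; both inputs are assumptions from the comment above.
    size_ratio = 100                       # assumed ratio of feature sizes
    energy_benefit_bound = size_ratio ** 2
    print(energy_benefit_bound)            # 10000, i.e. the ~10,000x upper bound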


So? The brain had a heck of a lot more than 1.5B parameters.


Are you measuring training and/or inference?


It's many orders of magnitude larger than 40 watts * 5 seconds. :)

I have a running bet with some friends that one of the following is true:

(1) The brain is somehow leveraging quantum computing to achieve polynomial or square root acceleration on combinatorial search and optimization problems.

(2) P=NP and there exist polynomial time classical algorithms for these problems.

(3) The naturalistic hypothesis fails and intelligence is somehow "supernatural" and does things that cannot be described or modeled within the confines of physical space-time.

I cannot think of any alternative that can possibly explain how the brain can do what it does on ~40 watts. Everything we have learned to date argues that intelligence and cognition involve a whole lot of massive combinatorial problems that can't possibly be solved classically on so little power.
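(To put a number on what a "square root acceleration" would buy: a minimal sketch assuming a brute-force combinatorial search over N candidates, where a Grover-style quantum search needs on the order of sqrt(N) evaluations. N is purely illustrative.)

    import math

    N = 2 ** 40                                # purely illustrative search-space size
    classical_evaluations = N                  # brute force: ~1.1e12 evaluations
    quantum_style_evaluations = math.isqrt(N)  # ~sqrt(N): about 1.05e6
    print(classical_evaluations, quantum_style_evaluations)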


I'm going to throw out (3) because it doesn't make any sense (to me), and we haven't found any evidence that this is true.

(2) seems possible, but highly unlikely.

(1) seems the most probable of the three options, and although I believe we have found evidence that biological systems exploit quantum effects in some instances, there doesn't seem to be any indication that brains (human or otherwise) use quantum effects for computation.

The thing that you seem to be discounting is that the bulk of the work has already been pre-computed. Our brains can do what they do in 5 seconds * 40 watts because they have been "designed" to do so via billions of years of evolution. In ML terms, the training stage has already happened by the time your brain starts thinking, it is simply doing inference at that point.


I agree that #1 is by far the most likely. #3 would mean we (meaning natural science) are wrong about the nature of the universe. I included it mostly to get across the mystery we have here, namely that what brains do cognitively on so little power appears to be "impossible" by classical CS metrics.

I think you are incorrect about precomputation though. The human genome is not very large. It's smaller than Windows 10 or Wikipedia. It's also not substantially different from that of a mouse or a chimpanzee. Most of what it encodes is highly conserved metabolic stuff. All the richness of human cognition is realized through a vanishingly small subset of that already small genetic code.
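(A quick sanity check on "not very large": at roughly 3.2 billion base pairs and 2 bits per base, the genome is on the order of 800 MB uncompressed; the snippet below is just that arithmetic, with the base-pair count as an approximate figure.)

    base_pairs = 3.2e9              # approximate human genome size
    bits = base_pairs * 2           # 2 bits per base (A/C/G/T)
    megabytes = bits / 8 / 1e6
    print(round(megabytes))         # ~800 MB, smaller than a Windows 10 install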

Nearly all learning and cognition happens after birth, meaning it's done by the brain (unless #3) using absurdly less energy than any known method of computation.


"The human genome is not very large"

I think this is reflective of a massive blindspot.

A program to print "hello world" isn't very large, but it doesn't compile itself or produce its own operating system or produce the hardware to run the OS to run the compiler...or produce the companies to produce the hardware and software...or produce the economy to produce the companies... Clearly there is information in the compiled program that is not in the source code or the language spec.


> The human genome is not very large.

This just means that the "algorithm of intelligence" is not terribly complicated. So we have hope of reverse engineering it.


That may be the case, but I don't think it solves the power mystery. It may be a simple algorithm, but it does an awful lot of NP-hard/NP-complete things on very little power. Among these are absurdly fast learning and fuzzy associative search.


> It's many orders of magnitude larger than 40 watts * 5 seconds. :)

Only if you also want to carry a running, fully preemptive operating system with a POSIX layer, netfilter so you don't get hacked, retpoline mitigations, OpenSSL to negotiate TLS connections, an SSH daemon so you have access, Docker, Electron, etc.


The brain is not a von Neumann architecture.

We have different architectures that can perform computation millions of times more efficiently than general-purpose computers. Of course, they lose on other axes (like precision).

What's 54398456905 * 23423645745? Your 40 W brain can't compute that in a minute, yet a 0.01 W calculator can in a millisecond.
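(A rough sketch of the energy gap those figures imply, taking the quoted wattages and times at face value:)

    # Energy = power * time; all numbers are the ones quoted above.
    brain_joules = 40 * 60                   # 40 W for a minute = 2400 J
    calculator_joules = 0.01 * 1e-3          # 0.01 W for a millisecond = 1e-5 J
    print(brain_joules / calculator_joules)  # ~2.4e8x more energy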


> Of course, they lose on other axes (like precision).

Well, the most important axis they don't have is UI. Von Neumann architectures are easy to program for.

Of course, being easy to program for means that it comes with a lot of overhead. Just running x86 Linux consumes a ton of unnecessary power.


We can build less accurate computers and analog computers. Neither of these even begin to approach what brains can do. A self-driving car's computer takes hundreds of watts to run, uses reduced precision and custom silicon wherever possible, and does not begin to approach the navigational ability of a mouse or bird whose brain consumes less than one watt of power.

The human brain didn't evolve to perform consciously explicit and exact calculations on huge numbers, but our navigational and positional awareness abilities do far more impressive things with far more data much faster than this. A monstrous amount of effective but subconscious number crunching is involved in being aware of where your body is in space using nothing more than vision and sensorimotor feedback, taking apart auditory input (including FFT-like transforms), etc.

I really think CS people suffer from Dunning-Kruger when they hand-wave around the impressiveness of biological systems. Study some actual biology and neuroscience. What biological systems do as a normal part of metabolism and cognition is as awesome and mind-blowing as the vast energies, times, and distances found in astronomy. Computers are specialized devices that perform impressive feats of specialized computation, but they do not even approach what biological systems do in terms of total data throughput per unit energy, learning ability, or associative and versatile memory, to name just a few.

Edit: computers seem so impressive to us because we built them specifically to do the things we didn't evolve to do very well, but I have little doubt that if there were some kind of evolutionary forcing function selecting us for conscious, explicit number-crunching ability, we would not need computers and wouldn't have built them.


> A self-driving car's computer takes hundreds of watts to run, uses reduced precision and custom silicon wherever possible, and does not begin to approach the navigational ability of a mouse or bird whose brain consumes less than one watt of power.

I would not trust the brain of a mouse or a bird to drive me in a car. Also the self-driving car computers which take hundreds of watts to run do not take advantage of custom silicon to the greatest possible extent, because the relevant algorithms are evolving rapidly. There is probably at least an order of magnitude or two of power efficiency that can be gained with current systems if the algorithms were truly baked into the chips.


I wasn't comparing performance at a specific task but performance at tasks of equal or greater difficulty.

Mouse and bird brains have evolved to operate mouse and bird bodies, not cars, and their learning ability isn't as powerful as a primate's or a human's, so I doubt they could learn to drive a car as well as we can, or as well as our specialized self-drive computers.

But... what they do manage in terms of controlling mouse and bird bodies is vastly more sophisticated and impressive than driving a car. A mouse runs around on four independently controlled legs and can tackle a vast array of terrains while dodging or chasing moving objects. Birds can navigate in 3d space while flying with articulated flapping wings with complex control surfaces operated by dozens of muscles.

Driving a car is ridiculously easy compared to anything like that. If mouse and bird brains had evolved to control cars I'd absolutely trust them to drive me around at least as much if not more than I trust a Tesla's autopilot. Driving is a simpler problem than operating a mouse body.

Don't get me wrong: our self-drive AIs are amazing engineering achievements. I'm just pointing out the impressive performance of tiny brains using fractions of a watt of power at much more difficult tasks.

The thing that blows my mind and makes me hypothesize quantum computing or even P=NP is the power requirements of those brains. It's "impossible." I'm not suggesting that we can't figure it out, just that we haven't yet and that it's probably going to take more or different approaches than we think it will take.

Immune systems were once considered so "impossible" that it led several researchers to abandon science in frustration, but we eventually got a good understanding of what was going on (and it's impressive!). Understanding immune systems had to wait for molecular genetics and modern evolutionary learning theory among other things. I suspect that really replicating brain-like performance will have to wait for something as far beyond our current state of the art as those were in the 1920s.


Parent was making the point that we have no computer with an architecture similar to the human brain (billions of tiny compute elements). Artificial neural networks try to simulate that, but they simulate billions of parameters on thousands of cores (CPUs/GPUs).

Of course it's highly inefficient, just as it's highly inefficient for the brain to multiply two numbers exactly.

So you also suffer from Dunning-Kruger: you imagine that all computers can be are von Neumann machines.


It appears that "Supremacy" isn't going away ;)

Overall, a very enjoyable read. As long as we are speculating on the limits of bio-based computability, the same physical constraints governing inanimate systems would of course apply to biochemical reactive systems.

What biology does buy you is reproduction: a self-propelling proclivity to seed machines over a wide geographic or cosmic area.

https://en.wikipedia.org/wiki/Limits_of_computation


This way of naming things gives me the impression that science is no longer about advancing knowledge but about finding the most grandiloquent way to speak about your work. All this "quantum supremacy" and now "cellular supremacy" seems really disturbing.

Since they propose the term in this article, I hope that someone will find a better one. It's an interesting subject and it deserves a better name.



