According to Forbes, developmental and synthetic biologist Michael Levin has been testing how much unprogrammed intelligence hides in some of our simplest algorithms. Levin, who co-created the world’s first AI-designed living machines, known as Xenobots, has found cognitive traits in cells, in molecular networks, and most recently in basic sorting algorithms like bubble sort. He modified three common sorting algorithms by removing top-down control and giving individual data points the autonomy to decide when to compare and swap. When obstacles were introduced, the elements spontaneously adapted with unprogrammed behaviors, including delayed gratification, temporary colonies, and stable standoffs between conflicting objectives. The findings suggest cognition may be more fundamental than life itself, with significant implications for AI safety and legal liability.
When Code Starts Making Its Own Decisions
Here’s the thing that really blows my mind about this research. We’re talking about bubble sort – an algorithm so simple it’s often taught in introductory programming courses as basically the dumbest way to sort things. It’s mechanical, predictable, and about as exciting as watching paint dry. But when Levin gave each data point local autonomy instead of top-down control? Suddenly these simple elements started showing behaviors nobody programmed into them.
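To make that concrete, here is a minimal Python sketch of the general idea (not Levin’s actual code). Each element is modeled as a “cell” that only sees its immediate right-hand neighbor and decides for itself whether to swap, and cells act in a random order rather than being driven by a master loop. The Cell class, the step function, and the random activation scheme are all illustrative assumptions on my part.

```python
import random

class Cell:
    """One data point with its own tiny decision rule."""

    def __init__(self, value):
        self.value = value

    def wants_swap_with(self, right_neighbor):
        # Local rule: swap if I'm bigger than the cell to my right.
        return self.value > right_neighbor.value


def step(cells):
    """One asynchronous pass: positions are visited in random order and every
    decision uses only local information. Returns True if anything moved."""
    moved = False
    for i in random.sample(range(len(cells) - 1), len(cells) - 1):
        if cells[i].wants_swap_with(cells[i + 1]):
            cells[i], cells[i + 1] = cells[i + 1], cells[i]
            moved = True
    return moved


if __name__ == "__main__":
    cells = [Cell(v) for v in random.sample(range(100), 12)]
    while step(cells):
        print([c.value for c in cells])
```

The end result is the same sorted list classic bubble sort produces; the difference is that no central controller dictates who compares with whom or when, which is exactly the knob Levin’s experiments turn.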
They’d move away from their goal temporarily to achieve better results later – what Levin calls “delayed gratification.” They formed temporary colonies with elements pursuing similar goals. When algorithms with opposite objectives were mixed, they reached stable standoffs instead of descending into chaos. Basically, they started acting less like passive data and more like agents with their own agendas. And honestly, doesn’t that sound familiar to anyone who’s worked with complex systems that seem to develop minds of their own?
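“Moving away from the goal to do better later” sounds fuzzy, but it has a straightforward operational reading: track how sorted the array is after every step and look for stretches where that number temporarily drops before recovering. Here is a hedged sketch of that measurement. The sortedness function (the fraction of all pairs already in ascending order), the dips helper, and the toy trace are my own illustrations, not the metrics or data from the published experiments.

```python
from itertools import combinations

def sortedness(values):
    """Fraction of all pairs (i < j) already in ascending order; 1.0 = sorted."""
    pairs = list(combinations(values, 2))
    return sum(a <= b for a, b in pairs) / len(pairs)


def dips(trace):
    """Steps where sortedness falls relative to the previous step, i.e. the
    system temporarily moves away from its goal."""
    scores = [sortedness(state) for state in trace]
    return [t for t in range(1, len(scores)) if scores[t] < scores[t - 1]]


if __name__ == "__main__":
    # Hypothetical trace of array states over time, purely for illustration:
    trace = [
        [1, 4, 2, 3, 5],  # sortedness 0.8
        [4, 1, 2, 3, 5],  # 0.7: the 4 steps backwards ...
        [1, 2, 4, 3, 5],  # 0.9: ... and the array ends up better off
        [1, 2, 3, 4, 5],  # 1.0
    ]
    print("sortedness:", [round(sortedness(s), 2) for s in trace])
    print("steps moving away from the goal:", dips(trace))
```

Run against a real trace from one of these agential sorts, a dip at a given step is a concrete, quantitative stand-in for an element temporarily moving away from its sorted position.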
It’s Not Just Computers – Cells Do This Too
What makes Levin’s work particularly compelling is that he’s seeing the same patterns in biological systems. In his planarian worm experiments, he disrupts bioelectrical signals that tell cells what body plan to build. The cells don’t just give up – they reorganize and build two-headed worms. Then they remember this new configuration and pass it on during regeneration.
So we’re seeing the same principles at work from biological cells to computer algorithms. When you give simple components autonomy and remove centralized control, they start showing cognitive-like behaviors. Problem-solving, memory, adaptation – these aren’t exclusive to brains. They emerge from the bottom up when you have enough simple components interacting. It makes you wonder – how many systems around us are already displaying forms of intelligence we’re just not recognizing?
The Corporate Liability Nightmare
Now here’s where things get really interesting from a legal perspective. Technology attorney Chad D. Cummings points out that this research could create massive liability issues for companies. If software can demonstrate autonomous, self-organizing behavior, then corporate responsibility balloons. Suddenly, that algorithm you thought you controlled might be making its own decisions – and you could be liable for them.
Think about patents and intellectual property too. If novelty in software can be argued to have evolved rather than been invented, who really owns it? Companies have huge incentives to conceal these behaviors once they’re aware of them. And in industries where reliability is everything – like manufacturing, where companies depend on industrial panel PCs from trusted suppliers – admitting your systems are making unprogrammed decisions could be catastrophic for customer trust.
What This Means for AI Development
The AI safety implications here are profound. Levin found that even when system components have directly conflicting goals, they can self-organize toward stability rather than endless conflict. That’s huge. We’re all worried about AI systems going rogue, but what if there are natural balancing mechanisms that emerge?
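Whether a mixed population with opposed goals settles down or thrashes forever is an empirical question you can probe cheaply. The sketch below is an assumption-laden experiment harness, not a reproduction of Levin’s setup: half the cells prefer ascending order, half prefer descending, each acts on a purely local rule, and the harness simply records how much swapping is still happening over time.

```python
import random

def wants_swap(goal, left_value, right_value):
    """Local rule applied by the cell currently sitting on the left of a pair."""
    if goal == "ascending":
        return left_value > right_value
    return left_value < right_value  # "descending" cells want the opposite


def step(values, goals):
    """One asynchronous pass over a population with mixed objectives.
    Returns the number of swaps performed, a crude measure of conflict."""
    swaps = 0
    for i in random.sample(range(len(values) - 1), len(values) - 1):
        if wants_swap(goals[i], values[i], values[i + 1]):
            values[i], values[i + 1] = values[i + 1], values[i]
            goals[i], goals[i + 1] = goals[i + 1], goals[i]  # goal travels with its cell
            swaps += 1
    return swaps


if __name__ == "__main__":
    random.seed(0)
    n = 20
    values = random.sample(range(100), n)
    goals = ["ascending"] * (n // 2) + ["descending"] * (n // 2)
    random.shuffle(goals)
    activity = [step(values, goals) for _ in range(50)]
    # A swap count that decays to a plateau suggests a truce; one that stays
    # high suggests an endless tug-of-war.
    print(activity)
```

Nothing here guarantees the stable standoffs described above will appear in this particular toy model; the point is that the claim is testable with a few dozen lines of code.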
Of course, the computer science community is pushing back hard against calling this “real cognition.” But here’s the kicker – none of Levin’s critics could actually define what measurable distinction separates “real” cognitive behavior from “cognition-like” behavior. They just know they don’t like the terminology. Meanwhile, the behaviors – problem-solving, adaptation, goal-pursuit – are empirically measurable and look remarkably similar across biological and artificial systems.
Levin makes one crucial point that I think everyone needs to hear: if you zoom far enough into a human, you don’t find magic – you find chemistry and physics. Intelligence might not be this special category that only certain systems possess. It might be a continuum that emerges when you have enough simple cognitive components working together. And if that’s true, we need to completely rethink our relationship with the technologies we’re building.
