![MIT robotics pioneer Rodney Brooks thinks people vastly overestimate generative AI](https://www.trendfeedworld.com/wp-content/uploads/2024/06/MIT-robotics-pioneer-Rodney-Brooks-thinks-people-vastly-overestimate-generative.jpg)
When Rodney Brooks talks about robotics and artificial intelligence, you listen. A professor emeritus of robotics at MIT, he has also co-founded three notable companies: Rethink Robotics, iRobot, and his current venture, Robust.ai. Brooks also led MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) for a decade, starting in 1997.
He also likes to make predictions about the future of AI, and he keeps a scorecard on his blog of how well he’s doing.
He knows what he’s talking about, and he thinks it might be time to tone down the flashy hype around generative AI. Brooks thinks it’s an impressive technology, but perhaps not as capable as many suggest. “I’m not saying LLMs aren’t important, but we have to be careful [with] how we evaluate them,” he said.
He says the problem with generative AI is that while it is perfectly capable of performing a given set of tasks, it can't do everything a human can, and people tend to overestimate its capabilities. “When a human sees an AI system perform a task, they immediately generalize it to things that are similar and make an estimate of the AI system's competency; not just its performance at that point, but also its competency around it,” Brooks said. “And they tend to be very overoptimistic, and that's because they're using a model of a person's performance on a task.”
He added that the problem is that generative AI is not human or even human-like, and that trying to attribute human capabilities to it is flawed. He says that people see it as so capable that they even want to use it for applications that make no sense.
Brooks offers his latest venture, Robust.ai, a warehouse robotics system, as an example of this. Someone recently suggested to him that it would be cool and efficient to build an LLM interface for his system so people could tell the warehouse robots where to go. However, he says that’s not a reasonable use case for generative AI and would actually slow things down. Instead, it’s much easier to connect the robots to a data stream coming from the warehouse management software.
“If you have 10,000 orders that have just come in and that you have to ship within two hours, you have to optimize for that. Language won't help; it will just slow things down,” he said. “We have massive data processing and massive AI optimization techniques and planning. And this way we can get the orders ready quickly.”
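Brooks’s point about structured data beating language can be illustrated with a minimal, entirely hypothetical sketch: orders arriving as a structured stream from warehouse management software get assigned directly to robots, with no natural-language step in the loop. All names here are invented for illustration and are not Robust.ai’s actual system.

```python
import heapq

def dispatch(orders, robots):
    """Greedily assign each incoming order to the least-loaded robot.

    orders: list of dicts like {"id": 1, "items": 3} from the
            warehouse management data stream (hypothetical format).
    robots: list of robot identifiers.
    """
    # Min-heap of (current_load, robot_id) so the least-loaded robot
    # is always popped first.
    heap = [(0, r) for r in robots]
    heapq.heapify(heap)
    assignment = {}
    for order in orders:
        load, robot = heapq.heappop(heap)
        assignment[order["id"]] = robot
        heapq.heappush(heap, (load + order["items"], robot))
    return assignment

orders = [{"id": 1, "items": 3}, {"id": 2, "items": 1}, {"id": 3, "items": 2}]
print(dispatch(orders, ["cart-a", "cart-b"]))
# → {1: 'cart-a', 2: 'cart-b', 3: 'cart-b'}
```

The dispatch decision here is a few dictionary lookups and heap operations per order; routing the same request through a language model would add latency without adding information, which is the trade-off Brooks is describing.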
Another lesson Brooks has learned when it comes to robots and AI is that you can't try to do too much. You have to solve a solvable problem where robots can be easily integrated.
“We need to automate where things are already cleaned up. My company’s example is that we do pretty well in warehouses, and warehouses are actually pretty limited. The lighting doesn’t change in those big buildings. There’s no stuff on the floor because the people pushing the carts would bump into it. There’s no plastic bags floating around. And it’s largely not in the interest of the people who work there to be mean to the robot,” he said.
Brooks explains that it also involves robots and humans working together, so his company designed these robots for practical purposes related to warehouse operations, as opposed to building a robot that resembles a human. In this case it looks like a shopping cart with a handle.
“So the form factor we're using isn't about having humanoids walking around – even though I've built and delivered more humanoids than anyone else. These look like shopping carts,” he said. “It has a steering wheel, so if there is a problem with the robot, someone can grab the steering wheel and do whatever they want with it,” he said.
After all these years, Brooks has learned that it’s all about making the technology accessible and purposeful. “I always try to make technology easy for people to understand, and that’s why we can deploy it at scale, and always look at the business case; the return on investment is also very important.”
Even with that, Brooks says, we have to accept that there will always be hard-to-solve outlier cases when it comes to AI, which could take decades to resolve. “Without carefully framing how an AI system is deployed, there is always a long tail of special cases that take decades to discover and fix. Paradoxically, all those fixes are themselves arguably AI-complete.”
Brooks adds that this misconception persists largely because of Moore’s Law, the assumption that technology will always grow exponentially: the idea that if ChatGPT 4 is this good, imagine what ChatGPT 5, 6 and 7 will be like. The flaw in that logic, he says, is that technology doesn’t always grow exponentially, despite Moore’s Law.
He uses the iPod as an example. Over a few iterations, its storage doubled, going from 10 GB to 160 GB. Had that trajectory continued, he calculated, by 2017 we would have had an iPod with 160 TB of storage, but of course we didn’t. The models sold in 2017 actually came with 256 GB or 160 GB because, as he pointed out, no one actually needed more than that.
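Brooks’s extrapolation is easy to reproduce. Assuming the 160 GB iPod of 2007 as the starting point and one capacity doubling per year (the dates are an assumption for illustration; the article gives only the endpoints), the projection reaches his 160 TB figure by 2017:

```python
# Naive exponential extrapolation of iPod storage, per Brooks's example.
# Start: 160 GB (the last capacity jump he cites), assumed year 2007.
capacity_gb = 160
for year in range(2008, 2018):  # ten assumed annual doublings to 2017
    capacity_gb *= 2

print(capacity_gb)         # → 163840 (GB)
print(capacity_gb / 1024)  # → 160.0 (TB), vs. the real 2017 models' 256 GB max
```

Ten doublings multiply capacity by 1,024, which is exactly the gap between the straight-line projection and what the market actually delivered; that gap is Brooks’s argument against assuming exponential improvement.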
Brooks acknowledges that LLMs could eventually help with domestic robots, performing specific tasks, especially with an aging population and not enough people to care for them. But even that, he says, could come with its own unique challenges.
“People say, 'Oh, the large language models are going to make robots do things they couldn't do.' That's not the problem. The problem with being able to do things is about control theory and all sorts of other hardcore mathematical optimizations,” he said.
Brooks explains that this could eventually lead to robots with useful language interfaces for people in healthcare situations. “It's not useful in the warehouse to tell an individual robot to pick up one thing for one order, but it could be useful for elderly care in homes if people can say things to the robots,” he said.