Apple Intelligence: These Bots Can't Reason and "Possibly" Never Will: Part 2

Today I'm going to talk about two of my heroes, Lucy Suchman and Gary Marcus. Suchman and Marcus are a little more senior than I am, but they were writing, and I was studying, at the cusp of the AI winter. The AI winter was a series of dead ends, both theoretical and experimental. They were so unsolvable that Rodney Brooks has called the period a "cul-de-sac." We've collectively been led to believe that the winter thawed and things changed. They never did. The problems we had in 1987 we still have now; the only difference is that we've decided to spend trillions proving it in real-time experiments.

What's changed since then is the amount of control and resources we've been willing to give select people over the data sets and over the circumstances of the software's performance. Suchman is emerita at Lancaster University and also worked for Xerox. In 1987, she wrote about the complete failures of intelligent automation and the inability of computer science to replicate meaningful action except in ways that were deliberately contrived, theatrical, and deceptive. Describing the "highly constrained" environments and narrow, limited tasks these systems needed in order to succeed, and only with great effort, she suggested that we would be better off using these failures to develop more satisfactory models of human cognition, so that we could master human-computer interaction instead of perseverating on limitations that then seemed unsolvable. She was also aware that it would be easy to create the false impression of success; we all were, thanks to John Searle's Chinese Room thought experiment. With an almost touching naiveté she writes:

"It may simply turn out that the resistance of meaningful action to simulation in the absence of any deep understanding will defend us against false impression of theoretical success. " (Plans and Situated Actions, 1987)

It's touchingly naive because of what she could not foresee in 1987. She did not foresee a billionaire class with the willingness and the resources to constrain environments to their own benefit. The ultimate game plan for robotaxis and driverless cars, for example, is straight from the Ford playbook: get the federal government to optimize the infrastructure, constraining the roads to cooperate with the cars so they can work at all. The taxpayer will pay for the constraints necessary to give bureaucrat "tenderpreneurs" the illusion of theoretical success.

More importantly, when Suchman was writing the above, Eastman Kodak had barely started the trend of wholesale exporting of IT support work to human-rights-negative nations. It's difficult to imagine hiring 1,000 people to create the illusion of autonomous activity, because it doesn't seem cost-effective. But that's only true if you assume human rights and labor rights, things that have become less of a given in the intervening years. The one thing tech innovation has accomplished is remote, distributed workforces. However, that is an innovation they do not want human-rights-positive laborers to use. They want it only when it lets them extract high-cost labor at low rates without having to concern themselves with labor conditions. Hence, a worker in Charlotte finds herself forced to get up, commute to her office, and sit in a cubicle on Zoom with India. When asked why, we are told it's because of creativity, community, innovation, and to keep the TGI Fridays from going bankrupt. Uh-huh.

If we review case studies like Theranos, WeWork, the latest Tesla Optimus exposé, the 1,000 offshore cashiers required to run Amazon's "Just Walk Out" cashier-less grocery stores, the remote-driver fleets behind robotaxis, the Gemini demo fake-out, and self-service BI, we have to ask: has Silicon Valley become the business of manufacturing the false impression of theoretical success?

The fakery and pageantry behind the Optimus bot, and Musk's own admission that it costs more to run and maintain these robots than to hire humans, were dutifully reported by fearless auto-tech reporters like Hyunjoo Jin, who has since gone on to co-create work that has won Loeb and Pulitzer Prizes. She was publishing about the failures of Optimus two years ago, yet she doesn't seem to make the same headlines. Why?

At any rate, I'm super happy for our favorite AI curmudgeon, Gary Marcus.

Now that Apple's own researchers declare, and the LA Times reports, that THESE BOTS CAN'T REASON, I'm seeing him quoted all over the place. Way to go, Gary! We're happy for you. It's important to note, though, that other people have been saying this for quite some time. I encourage people to look up Liza Dixon's work on autonowashing and to follow and read Emily Bender's work. And don't forget the courageous Timnit Gebru.

Prima facie, a list of concrete, manifestly documented unsolved problems plagues classical computationalism. These problems are not only well known, they are intractable, to the degree that some people call them "constraints," and I happen to agree with that idea. I'm on record here and elsewhere, since 2003, as stating that we need a second-generation model of "computing" that doesn't rely on the philosophical errors within classical computationalism. The way classical computation encodes and manipulates information is useful for an array of tasks; however, it hits hard limits when it comes to interpreting human language and using logic to solve puzzles as an agentic intelligence, independent of human input.
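To make the symbol-manipulation point concrete, here is a minimal, hypothetical sketch in the spirit of Searle's Chinese Room and Weizenbaum's ELIZA: a program that produces plausible-sounding replies by pure pattern matching, with no model of meaning anywhere inside it. Every rule below is invented for illustration; it describes no real system's internals.

```python
import re

# Hypothetical rule book: a regex pattern paired with a response template.
# The symbols mean nothing to the program; they are just strings to shuffle.
RULES = [
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "How long have you felt {0}?"),
    (re.compile(r"\bbecause (.+)", re.IGNORECASE), "Is that the real reason?"),
]

FALLBACK = "Tell me more."


def reply(utterance: str) -> str:
    """Return a canned response by matching surface patterns only.

    There is no parsing of meaning, no world model, no reasoning:
    just a lookup and a string substitution, as in Searle's room.
    """
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return FALLBACK


if __name__ == "__main__":
    print(reply("I am worried about automation."))
    # -> Why do you say you are worried about automation?
    print(reply("It happened because the demo was staged."))
    # -> Is that the real reason?
```

A transcript from something like this can read as attentive conversation, yet nothing in the program understands a word. That gap between performance and comprehension is exactly where the false impression of theoretical success lives.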