
Red Kill Switch for AI Autonomous Systems May Not be a Life Saver


By Lance Eliot, The AI Trends Insider

We all seem to know what a red stop button or kill switch does.

Whenever you believe that a contraption is going haywire, you merely reach for the red stop button or kill switch and shut the erratic gadgetry down. This urgent knockout can be implemented via a bright red button that is pushed, an actual pull-here switch, a shutdown knob, a shutoff lever, and so on. Alternatively, you can simply pull the power plug (either literally, or figuratively via some other means of cutting off the electrical power to the system).

Besides utilizing these stopping acts in the real world, a plethora of movies and science fiction tales have portrayed big red buttons or their equivalent as a vital element in suspenseful plot lines. We have repeatedly seen AI systems in such stories go utterly berserk, whereupon the human hero must brave devious threats to reach an off-switch and stop whatever carnage or global takeover was underway.

Does a kill switch or red button really offer such a cure-all in reality?

The answer is more complicated than it might seem at first glance. When a complex AI-based system is actively running, the belief that an emergency shutoff will provide sufficient and safe immediate relief is not necessarily warranted.

In short, the use of an immediate shutdown can be problematic for myriad reasons and could introduce anomalies and issues that either do not actually stop the AI or might have unexpected adverse consequences.

Let’s delve into this.

AI Corrigibility And Other Facets

One gradually maturing area of study in AI consists of examining the corrigibility of AI systems.

Something that is corrigible has the capacity to be corrected or set right. The hope is that AI systems will be designed, built, and fielded to be corrigible, having an intrinsic capability for permitting corrective intervention. So far, unfortunately, many AI developers are unaware of these concerns and are not actively devising their AI to provide such functionality.
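To make the notion of corrigibility concrete, here is a minimal illustrative sketch (not from the article, and deliberately simplified): an agent whose main loop checks for a human-issued interrupt before every action and yields immediately, rather than treating the interrupt as something to route around. The class name and methods are hypothetical.

```python
class CorrigibleAgent:
    """A toy agent that permits corrective intervention.

    The key property: once a human requests a stop, the agent takes
    no further actions, and it has no incentive or mechanism to
    resist or undo the interruption.
    """

    def __init__(self):
        self.interrupted = False
        self.actions_taken = 0

    def request_stop(self):
        # The human's corrective intervention: a one-way latch.
        self.interrupted = True

    def step(self):
        # Check the interrupt *before* acting, on every cycle.
        if self.interrupted:
            return False  # defer to the human; do nothing further
        self.actions_taken += 1  # stand-in for one unit of work
        return True


agent = CorrigibleAgent()
for _ in range(3):
    agent.step()
agent.request_stop()
agent.step()  # takes no action after the stop request
print(agent.actions_taken)  # → 3
```

The point of the sketch is the ordering: the interrupt check gates every action, so intervention takes effect within one cycle. Real systems are vastly messier, which is precisely the article's concern.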

An added twist is that a thorny question arises as to what is being stopped when a big red button is pressed. Today’s AI systems are often intertwined with numerous subsystems and might exert significant control and guidance over those subordinated mechanizations. In a sense, even if you can cut off the AI that heads the morass, sometimes the rest of the system might continue unabated, and as such, could end up autonomously veering from a desirable state without the overriding AI head remaining in charge.
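The failure mode described above can be sketched in a few lines of Python (a hypothetical illustration, with invented class names): a "head" controller exposes a kill switch, but a subordinate worker keeps running after the switch is thrown unless it was explicitly designed to watch that switch too.

```python
import threading
import time


class AIHead:
    """Hypothetical top-level controller with a kill switch."""

    def __init__(self):
        self.kill_switch = threading.Event()

    def stop(self):
        self.kill_switch.set()


class Subsystem(threading.Thread):
    """A subordinate worker.

    If honors_kill_switch is False, the worker only watches its own
    local stop flag, so pressing the head's red button does nothing
    to it. If True, the head's shutdown propagates.
    """

    def __init__(self, head, honors_kill_switch):
        super().__init__(daemon=True)
        self.head = head
        self.honors_kill_switch = honors_kill_switch
        self.cycles = 0
        self._local_stop = threading.Event()

    def run(self):
        while not self._local_stop.is_set():
            if self.honors_kill_switch and self.head.kill_switch.is_set():
                break  # shutdown propagated from the head
            self.cycles += 1  # stand-in for ongoing autonomous work
            time.sleep(0.01)

    def halt(self):
        self._local_stop.set()


head = AIHead()
naive = Subsystem(head, honors_kill_switch=False)
safe = Subsystem(head, honors_kill_switch=True)
naive.start()
safe.start()
time.sleep(0.05)

head.stop()          # press the big red button
safe.join(timeout=1)

print(safe.is_alive())   # → False (shutdown propagated)
print(naive.is_alive())  # → True (still churning away, unabated)
naive.halt()
naive.join()
```

The design point is that a kill switch is only as effective as its propagation: every subordinated mechanization needs to be wired to observe it, or the "stopped" system keeps acting.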

Especially disturbing is that a subordinated subsystem might attempt to reignite the AI head, doing so innocently and not realizing that there has been an active effort to stop the AI. Imagine the surprise for the humans that pressed the big red button, expecting everything to come to a halt, only to discover that a subordinated subsystem has dutifully restarted the AI on its own.



