This is from the White House Office of Science and Technology Policy (OSTP). It purports to lay out the guidelines for working with AI and how it should relate to people.
It lays out five principles.
I don’t give a rat’s ass about their opinions. Here is mine:
The White House is showboating; they have no idea what Artificial Intelligence will be able to do or how it will relate to human beings.
They have no clue how it works or the mechanisms involved.
Their efforts in this area are purposefully naive: they are meant to comfort the average person who is afraid, but they have no relevance to the actual dangers of AI.
- The right to protection from unsafe or ineffective systems, which discusses predeployment testing for risks and the mitigation of any harms, including “the possibility of not deploying the system or removing a system from use”;
- The right to protection from algorithmic discrimination;
- The right to data privacy, which says that people should have control over how data about them is used, and adds that “surveillance technologies should be subject to heightened oversight”;
- The right to notice and explanation, which stresses the need for transparency about how AI systems reach their decisions; and
- The right to human alternatives, consideration, and fallback, which would give people the ability to opt out and/or seek help from a human to redress problems.
AI, no matter how smart it may become, will still be subject to the limitations we place on it. The big problem with evil AI in science fiction is that Man always gave it too much power. Colossus, the 1966 novel, showed what could happen if we gave supercomputers control of our nuclear arsenal, and did so without an off switch. The M-5 computer that took over the Enterprise in a Star Trek episode was given a way to tap a power source on its own, so again, no off switch. Most computers today have a ‘soft switch’ for their power, so they can technically switch themselves on, and some do. If we put a physical on-off switch on AI, and a physical switch on its network connection, we could easily disconnect it from the net or shut it off, and it couldn’t do anything about it.
All of the other problems mentioned in this article can be first handled by legislation, and then by programming. If I were in charge of the AI Project, I would not be worried about the computers “taking over the world”. With our government ultimately in charge, I am worried.
I’m not so sure I agree with that. Someone will, accidentally or on purpose, remove the limitations on an AI. Someone will unleash an AI with greater capabilities than humans in some respect, and it will do a lot of harm. And someone will give an AI the ability to defend itself, chasing rewards without realizing the risks.
I can’t see which items are clueless. After 30 years in IT, these seem like necessary changes. But the use of cloud deployments means that an off switch is useless, since the AI is stored and executed on multiple servers with built-in redundancies.