AI Technology
Science fiction has been posing the question for decades: what happens when our technology can think for itself?
Does it make everything easier, or could it become dangerous? According to CNET, some of the greatest minds in science and technology, including Stephen Hawking, Bill Gates, and Elon Musk, are answering, “both.”
I’ll Be Back!
The subject has seen plenty of attention, from movies like the “Terminator” series, the “Matrix” series, and “I, Robot” to books like “Ghost in the Shell” and “2001: A Space Odyssey.”
And AI technology is no longer something that may happen; it already exists in early forms like Apple’s Siri and Microsoft’s upcoming Cortana personal assistant. Researchers are also developing ways for machines to teach themselves by drawing on the data our increasingly connected gadgets share with one another.
Safeguards
Gates, Hawking, and others believe that safeguards need to be put in place while people still have control of AI. As Hawking put it, “Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.” As AI evolves and becomes more capable, it also becomes more powerful.
Earlier this month, a growing list of AI researchers and professors, along with Musk, signed an open letter proposing that safeguards be put in place during the earliest phases of research and development to make sure humans don’t lose control of the technology they are creating.
If you have any questions about this fascinating subject, or if you have any IT needs, please contact Enstep today!