With terms like “deep learning” and libraries such as TensorFlow being thrown around nowadays, it’s hard to keep up with what artificial intelligence really is. To some, artificial intelligence is “the ultimate goal”: creating something that surpasses the human brain, currently the most complex object we know of (Antonimuthu, 2016). To others, it’s merely a fantastical character inhabiting a robotic utopia or dystopia, the conclusion movies have conditioned us to reach.

First coined by John McCarthy in 1956, artificial intelligence is about far more than robotics. It spans everything from training machines to act with human-like intelligence to having them react to and learn from their mistakes, much as a human would. It does not stop there: AI is the idea of machine intelligence, as opposed to natural intelligence, and the scale to which it can expand is far larger than what human brains perceive as possible. What we do not realize is that we use artificial intelligence every day, whether talking to Siri or browsing suggestions. That is because AI is designed so that people are not supposed to notice that a computer is “calling the shots” (HubSpot, 2017).

AI used to be a concept that was studied but never really applied; that changed in the past few years. The exponential growth in the amount of data being uploaded, with 90% of the world’s data generated in the past two years, has enabled AI to become smarter and more feasible (HubSpot, 2017). Moreover, today’s computers have processing speeds that would have been almost unimaginable to people living when the idea of AI first emerged, which lets them make sense of information much more quickly. As a result, many tech firms are investing in this market and funding the industry, creating opportunities for AI companies across a spectrum that runs from natural language processing at one end to machine learning at the other. Below is a timeline of AI development, based on the rapid progress currently being made, expert surveys, and trend extrapolation (Open Philanthropy Project, “Potential Risks from Advanced Artificial Intelligence”). Many postulate that AI may be able to outperform humans in the near future, though it is difficult to make confident predictions about AI (Marsden, “Artificial Intelligence AI Timeline-Infographic”).

  • Past:
    • 1950s: Turing Test (the idea of machine intelligence is hypothesized); the term “artificial intelligence” is coined
    • 1960s: first industrial robot, chatbot, and “electronic person”
    • 1970s – mid-1990s: AI winter (dead ends and false starts)
    • Late 1990s – early 2000s: production of autonomous and consumer bots
  • Present:
    • 2010s – now: Siri, Watson, Eugene, Alexa, Tay, AlphaGo (highly capable bots, some passing the Turing Test, beating world champions, and winning game shows)
      Tay was a Microsoft chatbot that went rogue and made many offensive remarks, giving developers an early look at the dangers of AI.
  • Future Forecast:
    • Based on the substantial growth in machine learning, it is possible that overall AI progress will be equally significant.
    • Machines may outperform humans in a variety of intellectual areas by around 2025.
    • AI researchers say there is a “substantial probability” that, within 10–40 years, machines “that can carry out most human professions at least as well as a typical human” will be created (Müller and Bostrom, “High-Level Machine Intelligences (HLMI)”).

Due to the exponential growth of the AI industry, major public sectors are expected to be directly impacted by the production of new artificial intelligence machines and information. The main ones are healthcare, government, and education, with quality of life and poverty also factored into the equation. All of these impacts can be categorized at local and global scales, brought forth by “The Fourth Industrial Revolution” (Schwab, “The Fourth Industrial Revolution: what it means, how to respond”).

On a local scale, AI can shape how a community looks, as well as a person’s everyday life. As AI spreads, construction jobs will become far more automated and landscaping will be far easier to carry out. In addition, homes will rely more on AI for things such as controlling the thermostat, lighting, locks, and television displays. More importantly, AI will shape our interactions with one another, our identity, and our privacy. Much like our current relationship with smartphones, our dependence on AI will likely erode human qualities such as compassion and cooperation, while smart sensors also pose a security threat.

On the other hand, AI has a large global impact, including the rising economic trend of industrial robots displacing manufacturing workers. Because the machine counterparts of human workers are so inexpensive and productive, many people at manufacturing plants and large-scale production companies worldwide are losing their jobs to AI (Karsten and West, “How robots, artificial intelligence, and machine learning will affect employment and public policy”). On the upside, AI could enable people to lead higher-quality, more accessible lives: researchers are working to use AI to perform health diagnoses and find cures for “incurable” diseases, to replace the dated schooling hierarchy with affordable education worldwide, and to provide housing assistance, even to the unemployed.

Based on my findings, I have come to conclude that AI is not inherently destructive, but it can be concerning. Much of the concern comes from the notion that AI is about robots that will end up taking over the world; in reality, AI is what powers those robotic shells, and it can manifest itself in many different ways. What is great about AI is its ability to learn through neural networks and machine learning without distractions, hunger, or fatigue getting in the way. AI also allows humans to accomplish more in less time, handling routine tasks while we handle the strategic and relationship-centered ones.

With the benefits, however, come disadvantages. Rather than revenge-seeking robots, the more logical question we should be asking ourselves is: “What will happen to our jobs?” Even this is not enough to offset the advantages, since careers expand and change all the time to adapt to technology like AI. To prepare, we must shift our thinking so that humans and AI work together against a common problem rather than against each other. If humans continually fear AI, they will work to moderate the machine while the machine works to fix the problem, which is an ineffective use of resources. If humans and machines instead work together as one, the problem will be solved in a shorter, more efficient way, and both parties will be able to learn from the experience. It is easy to conclude that AI is bad without doing the research, but it is far more accurate and useful to conclude that AI can be beneficial, as long as one does the research needed to detach oneself from preconceived stigmas about it.

I believe the next steps for the AI industry are to improve both the quality and the accessibility of life. Below is a list of next steps that should be attainable in the near future and that move us toward the idea of HLMIs.

  1. Personalized medicine: This allows a shift from reaction to prevention by enabling AI to detect diseases, disorders, and other potential health risks before they occur. Bioinformatics data would also fuel machine learning, and this technology is predicted to reach its inflection point within the next five years.
  2. Generative adversarial networks (GANs): Machine learning algorithms that learn to generate realistic data from sample data through unsupervised learning. They make automation much easier and are implemented as a system of two neural networks contesting each other, a generator and a discriminator (a minimal code sketch follows this list).
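
For readers curious about what “two neural networks contesting each other” looks like in practice, here is a minimal, illustrative sketch in PyTorch. The toy one-dimensional “real” dataset, the tiny network sizes, and the training settings are my own assumptions for demonstration purposes, not details taken from any of the sources above.

```python
# Minimal GAN sketch: two small networks contesting each other on toy 1-D data.
# PyTorch and the toy Gaussian "real data" are illustrative assumptions,
# not something prescribed in this post.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Generator: turns random noise into fake samples.
generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
# Discriminator: scores how "real" a sample looks (probability via sigmoid).
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

def real_batch(n=64):
    # "Real" data: samples from a Gaussian centered at 4.0 (a stand-in dataset).
    return torch.randn(n, 1) * 0.5 + 4.0

for step in range(2000):
    # Train the discriminator to tell real samples from fakes.
    real = real_batch()
    fake = generator(torch.randn(64, 8)).detach()
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake), torch.zeros(64, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Train the generator to fool the discriminator.
    fake = generator(torch.randn(64, 8))
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

# After training, generated samples should cluster near 4.0 like the real data.
print(generator(torch.randn(5, 8)).detach().squeeze())
```

The intuition is the same at any scale: the generator keeps improving until the discriminator can no longer reliably tell its output from the real data.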

Keep in mind: AI is the outcome, not the technology.
