Alternatively, the movies Moneyball and Minority Report both use AI to make a prediction. In Moneyball, the prediction is how many runs a player will contribute in the game of baseball; in Minority Report, it is whether a person will commit a crime. In both cases, the output is a single dependent variable, which accurately reflects the machine learning process. This is the first place most people misinterpret AI. While AI is extremely powerful and can accomplish many things, it usually comes down to a simple question with a single answer, one that can be used to predict something within the same parameters and under the same assumptions. For example, if a machine hasn't "learned" a word before, it is unlikely to answer a question that includes that word. The robots in the movies mentioned earlier would be a compilation of hundreds of thousands of algorithms replicating human behavior (think about all the different, individual things you do each day -- each would need to be its own algorithm).
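To make that concrete, here's a quick sketch in Python of a Moneyball-style question. The player stats and numbers below are completely made up for illustration; the point is that the model answers exactly one question with one number.

```python
# A minimal sketch of the "single dependent variable" idea. The stats and
# numbers are invented, not from any real Moneyball analysis.
from sklearn.linear_model import LinearRegression

# Independent variables: on-base percentage and slugging percentage (made up)
X = [[0.350, 0.420],
     [0.310, 0.390],
     [0.290, 0.350],
     [0.380, 0.460]]
# Dependent variable: runs contributed per season (also made up)
y = [78, 65, 52, 91]

model = LinearRegression().fit(X, y)

# The model answers exactly one question: given these stats, how many runs?
new_player = [[0.330, 0.410]]
print(model.predict(new_player))  # a single number -- nothing more
```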
Moneyball is the true story of a major league baseball team using data to help choose players on a tight budget. Minority Report, on the other hand, might not be considered a movie about AI, because the "AI" is a group of humans with special powers. I feel it's still worth including because those special humans make predictions, much like a black-box neural network. The biggest difference between the two movies is that in Moneyball a human interprets the results and makes an informed decision, while in Minority Report there is no discussion of the prediction, just action. In Moneyball, the data scientist and general manager review and discuss the analysis, deciding on individual players given the prediction. In Minority Report, the name of a person about to commit a crime comes out of the black box and police officers immediately arrest them.
In that difference the two movies diverge, with Moneyball providing a positive interaction with AI and Minority Report leaving the viewer with a negative view of it. I'm sure that was the intention of the latter, perhaps as a commentary on the scary possibility that a government could take action on incomplete or inaccurate information. No one wants to be arrested without having actually done anything wrong. There is also the issue that you don't know what goes into making the prediction, hence the term "black box." That brings up one of my favorite topics: confidence! Most predictions have a probability associated with them. How sure is the machine that a person will commit a crime?
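To show what I mean, here's a small sketch. Most classifiers can report a probability alongside their prediction; everything below is toy data, and the "crime" framing is purely hypothetical, but it shows that "how sure is the machine?" has a concrete numeric answer that a human still has to interpret.

```python
# A sketch of prediction confidence using a toy classifier. The features
# and labels are invented -- the point is that the model outputs a
# probability, not a verdict.
from sklearn.linear_model import LogisticRegression

X = [[1, 0], [0, 1], [1, 1], [0, 0], [1, 0], [0, 1]]  # made-up features
y = [1, 0, 1, 0, 1, 0]  # 1 = the positive class (hypothetical)

clf = LogisticRegression().fit(X, y)

proba = clf.predict_proba([[1, 1]])[0][1]
print(f"Probability of the positive class: {proba:.2f}")
# Is 0.55 sure enough to act on? 0.99? Choosing that threshold is a human
# decision, not something the machine makes for us.
```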
I think it's the unknown, the black box of AI, that people are scared of. That's why it's important to be clear about the assumptions, independent variables, dependent variables, and confidence of an algorithm. This may not be possible in every case -- we can't tell people exactly what each node in our neural network does -- but we can try to interpret the results. Doing so is called explainable AI, and it has been a hot topic at data science conferences. The concern about not understanding or being able to control an algorithm is completely reasonable. One way to combat that fear is through data literacy, another big theme in the data space. While most people will never be experts in the data field, they should have a foundational understanding and be able to ask insightful questions of technology companies, news and informational sources, and the algorithms used in their everyday lives. With that understanding, individuals can vote and effect change based on their needs.
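As one small example of what interpreting the results can look like, here's a sketch using feature importances from a tree-based model. The feature names and data are invented; the idea is that even when we can't explain every node, we can report which inputs carried the most weight.

```python
# A small sketch of one explainable-AI technique: asking a model which
# inputs mattered most. Feature names and data are hypothetical.
from sklearn.ensemble import RandomForestClassifier

feature_names = ["age", "income", "num_purchases"]  # invented inputs
X = [[25, 40000, 3], [47, 82000, 10], [33, 55000, 1],
     [52, 91000, 12], [29, 47000, 2], [41, 73000, 8]]
y = [0, 1, 0, 1, 0, 1]  # toy dependent variable

model = RandomForestClassifier(random_state=0).fit(X, y)

# Feature importances won't tell us what each node does, but they give a
# rough, human-readable answer to "what is this prediction based on?"
for name, importance in zip(feature_names, model.feature_importances_):
    print(f"{name}: {importance:.2f}")
```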
Want to learn more? I recommend the book Machines That Think for more examples of what's possible. And be sure to check out my blog post about books that are helpful on your data journey (Naked Statistics is particularly helpful for learning to ask the right questions as a user of AI).
Oh, and sign up for my Data Book Club! It's a fun and motivating way to read about data topics, discuss them, and network.