One of the most famous bits of movie dialog, one that's become part of popular culture, is in the film 2001: A Space Odyssey. Dave Bowman says to the voice-driven AI, "Open the pod bay doors, HAL." To which HAL responds, "I'm sorry, Dave, I'm afraid I can't do that."

Most people only pay attention to the story. But if you're a software QA expert, what you heard was an obvious failure in software quality testing. The AI should have obeyed the order immediately.

But obviously, the HAL 9000 computer went off the rails. In the process, the computer killed all but one of the crew of Discovery, the spaceship that HAL was operating, and it failed in its mission to discover more about a mysterious monolith. That qualifies as a serious software failure, though, happily, a fictional one. Fortunately, in the real world, NASA does a better job.

It's equally obvious that thorough testing of the computer and its AI software had missed HAL's homicidal bent. HAL didn't view its actions as a mistake. "No 9000 computer has ever made a mistake or distorted information," HAL says as an introduction. "The 9000 series is the most reliable computer ever made." It's also likely that the makers of the HAL 9000 didn't think testing was necessary.

So what happened? The HAL 9000 computer, and its AI software, clearly used machine learning to modify its own programming. Its programming didn't include sufficient safeguards to prevent things like, say, killing your passengers. This is clearly a programming failure, and while it's unlikely that the programming would specifically allow such an action, such an error also must not have been flagged as contrary to mission success. One reason is that testing an AI is complex. If you accept the premise that HAL was a mission-critical application, then there are actual lessons one can learn about ensuring important software meets expectations, particularly in an environment with a lot of unknowns.