Without giving too much away, the movie takes a vein similar to Mary Shelley's Frankenstein and Blade Runner/Do Androids Dream of Electric Sheep?. Namely, it suggests (though it leaves the question open-ended) that it is we humans who determine how the potential of AI will influence the world.
I guess one of the interesting questions that the movie (and the wider genre) brings up is: if you found a functioning AI that had the potential to one day become dangerous, would you feel obliged to destroy it to protect yourself, obliged not to destroy it because it was sentient, or under no obligation either way? And is there any difference between destroying an AI and destroying a human?
I also thought it was just really good at putting a message across as a movie. It was quite original, not a sequel or an adaptation of a book or comic (that I'm aware of), and it didn't beat you over the head with its message until you cringed. Totally worth watching if you've got a few hours.