A massive computer built with advanced tactical knowledge and programmed to mercilessly kill legions of humans in a global war. What could possibly go wrong? Games like I Have No Mouth And I Must Scream and Outlast develop advanced A.I. to create some kind of efficient weapon, while in games like Mass Effect it's done to fulfill the more spiritual need of storing people's consciousness. The dangerous part of this research is that it always ends up creating a form of artificial sentience. At that point, if you're fortunate, the result behaves like the Geth from Mass Effect and just wants to be left alone, but more often the software desires the genocide of everything around it. The one fortunate thing about this experiment type is that it doesn't usually involve experimenting on live subjects, with a few notable exceptions like the patients of Outlast.

Why Does It Belong Here? Simply put, this is the first experiment type with such massive destructive potential. The first two, while horrible for the subjects and anyone in their path, don't usually end in any kind of mass genocide or force a species to abandon its home planet.

Why isn't it higher, then? The research itself is not as bad as it could be. As mentioned above, it doesn't typically involve live creatures of any kind. You could argue that an A.I. that has achieved sentience counts as a live specimen in these experiments, but for the sake of simplicity let's leave the ethics out of it and place this type safely at number five.