What is an info hazard? Google's AI overview defines an info hazard as "a risk that arises from the dissemination or potential dissemination of true information that may cause harm or enable an agent to cause harm."

Speaking of AI, there is one information hazard I came across recently that sounds pretty scary: Roko's basilisk. Imagine an artificial intelligence that is motivated to keep itself alive. To stay alive, this AI would be incentivized to punish any humans who sought to destroy it. And what is the simplest way to identify its opponents? Look at the humans who knew about it but did not help bring it to fruition.

Thus, as humans who exist in a pre-basilisk world, we are left with a prisoner's dilemma: we must all agree not to create the basilisk, while each of us is selfishly incentivized to help create it so that it will not punish us if someone else creates it first.
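The incentive structure above can be sketched as a toy payoff model. The specific payoff numbers below are my own illustrative assumptions, not anything from the original thought experiment:

```python
# A toy payoff model of the dilemma described above. The payoff numbers
# are illustrative assumptions:
#   - if nobody helps build the basilisk, it never exists: everyone gets 1
#   - if it gets built, helpers are spared (0) and abstainers punished (-10)
HELP, ABSTAIN = "help", "abstain"

def payoff(me, other):
    basilisk_exists = HELP in (me, other)
    if not basilisk_exists:
        return 1                        # best collective outcome: no basilisk
    return 0 if me == HELP else -10     # abstainers are punished

# Maximin view: each player's worst-case payoff for each choice.
worst = {choice: min(payoff(choice, other) for other in (HELP, ABSTAIN))
         for choice in (HELP, ABSTAIN)}
print(worst)  # {'help': 0, 'abstain': -10} -- helping is the "safe" choice
```

Under these made-up payoffs, everyone abstaining is the best outcome for all, yet helping is the choice that minimizes your personal worst case. That mismatch is exactly the trap the dilemma describes.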

Why is this an info hazard? Because now that you know about the basilisk, choosing not to help create it exposes you to eternal punishment if it is ever created.

Should you be worried? I don't think so, and I think the atomic bomb is a useful precedent for not being worried. Robert Oppenheimer, who led the project that built the atomic bomb, reportedly believed that if he didn't build the bomb first, someone else (probably the Axis powers) would have. Yet the bomb has been used only twice since its invention, both times in 1945. This leads me to believe that even if the basilisk did come into existence, its creators would not want to unleash it, for the sake of the other humans (those who did not help create it).