“We believe AI should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as is possible safely,” OpenAI writes in its first blog post, published just a few moments ago. The goal? Make the scope of A.I. less narrow. Right now, machines tend to be good at identifying people or at answering questions, but not both. The ultimate goal of A.I. research is to “generalize” intelligence: to build an algorithm that can do it all.

In pursuit of that, OpenAI’s founding team hired Ilya Sutskever as research director. Sutskever is currently a research scientist at Google who has worked with some of the best-known names in A.I. and machine learning, such as Geoff Hinton and Andrew Ng (who work with Google and Baidu, respectively).

The organization is a non-profit and hopes to spend only a small fraction of its billion-dollar seed funding in the next few years. It hopes to “freely collaborate” with other institutions, which makes sense, as nearly everyone on its research team comes from a prestigious institution such as Google, Stanford, or New York University.

Musk’s involvement in particular is noteworthy, given that the SpaceX founder has previously warned that artificial intelligence could be more dangerous than nuclear weapons. OpenAI would appear to be, in part, an effort to check the concentration of A.I. development going forward.

Developing…