Tuesday, January 28, 2020

Should We Care About Extinction/The AI Hilbert Problems

A lot of blog posts and class discussions have been about AI. I always find it stimulating to discuss what the future might look like given our current trajectory as a species. Sometimes, though, things take a darker direction and I wonder: what if the ways we are threatening our existence aren't such a big deal? I mean, if destroying the planet or surrendering ourselves to the power of AI is where our evolution has taken us, how concerned should we be? Natural selection, right? Species go extinct every day. If we can't (or won't) fix our problems, then that settles it - we lose! Could self-inflicted annihilation simply be our natural course?

My retort would be that these issues do not affect humans alone, and that we should not act as if they do. I am not sure what moral responsibility we bear for everything around us, such as plants, animals, and other natural features, but it seems we should not be so selfish, and it also seems that humans are capable of innovating safely.

I made a few Google searches on this idea of humans absolving themselves of responsibility for threats to our species. While I did not find anything directly related to this concept, there are clearly prominent public figures who advocate for prolonging our existence. For instance, TIME's most recent Person of the Year is an environmental activist concerned for our future, and presidential candidates are emphasizing the importance of environmental sustainability.

Also, a few years ago, popular thinkers Elon Musk and Stephen Hawking endorsed a Hilbert-esque list, the Asilomar AI Principles (it contains exactly 23 items, just like Hilbert's list of problems), laying out guidelines for how to approach the development of AI. The guidelines are broken into three parts: research issues, ethics and values, and longer-term issues. In each category, there is a motif of responsibility. Humans in the aggregate are responsible for how we develop AI, and, it seems, cooperation will need to extend beyond countries' borders in order to develop AI safely. While there may seem to be pessimism surrounding AI, it could instead be something that unifies people. Of the items on the list, I like number 19, which states that we should avoid strong assumptions about the upper limits of AI's capabilities, since we cannot know them in advance. While the list is helpful, I think it will be hard to follow every principle because new issues will arise that we have not yet considered; the list may need to expand.

3 comments:

  1. It's a bit embarrassing if you look at the fossil record when it comes to other hominids. The average hominid species in the fossil record lasted about a million years before going extinct. How long have humans been around? A few hundred thousand years at most. And how are we most likely to go out? By our own hand. I feel like we're the punchline of a joke...

  2. I agree with Andrew. We think that we are so successful, but the dinosaurs ruled the earth for over a hundred million years and then, poof, they were gone. Over and over, species have come and gone. Why do we think that we are going to be any different? Cockroaches are the ones that keep surviving. I'd bet on them outlasting humanity. Even if AIs supplant humans, they will still be dealing with cockroaches.

  3. I have come to the conclusion that humans regularly, even frequently, come close to the brink of killing ourselves. I believe that it is only when faced with dire circumstances that we truly innovate a solution to whatever problem is at hand. In short, we run into every door before we attempt to open it.

