Game thinking from Adam Clare

Tag: AI (Page 2 of 5)

Another Quick Glance At Artificial Intelligence

Artificial Intelligence (AI) is becoming better with every passing year and thus more interesting. I have no idea what the state of AI will be in years to come but for now, this is some noteworthy stuff for game makers.

Emergent AI

“You work for it” when you create emergent AI. In this talk, Ben Sunshine-Hill explores what it’s like to create and work with emergent artificial intelligences.

To hear Sunshine-Hill tell it, you should aim to design AI that behaves just like people and creatures in real life do, and that means you shouldn’t rely on “emergence” as a crutch; you should know exactly why your AI does what it does. At best, players should find your AI believable — not surprising.
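One way to read that advice (my illustration, not code from the talk) is to prefer decision logic whose every choice can be traced back to explicit reasons, for example a simple utility-based agent:

```python
# Illustrative sketch: a utility-based game agent whose every decision
# comes from explicit, inspectable scores, so the designer always knows
# exactly why it did what it did -- believable rather than "emergent".

def choose_action(state):
    """Score each candidate action and return the best, with its reasons."""
    actions = {
        "flee":   3.0 * state["danger"],   # run when threatened
        "eat":    2.0 * state["hunger"],   # eat when hungry
        "wander": 1.0,                     # baseline idle behaviour
    }
    best = max(actions, key=actions.get)
    return best, actions  # the score table explains *why* it was chosen

action, scores = choose_action({"danger": 0.9, "hunger": 0.4})
print(action, scores)  # flee wins: 2.7 vs 0.8 vs 1.0
```

Because the scores are plain numbers tied to named motivations, a surprising behaviour can always be debugged back to the weight that caused it.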

Artificial Intelligence Research in Games

The first in a multi-part series of public lectures on AI in games. Recorded on 20th October 2014 at the University of Derby.

In this first video, we detail some of the most interesting work in using video games as benchmarks within the AI research community. This is largely focussed on four competition benchmarks:

– The Ms. Pac-Man Competition
– The Mario AI Competition
– The 2K BotPrize
– The StarCraft AI Competition

What if AIs are more trustworthy than humans?

In this keynote session, Bitcoin developer Mike Hearn talks on the topic “Fighting for the right to be ruled by machines”. He outlines a possible scenario over the next 50 years, in which an ever-worsening political situation results in some people deciding that only computers/robots can be trusted to control the critical infrastructure of society (cars, planes, mobile networks, legal systems etc.) and therefore that the people currently in charge of them need to be evicted from those positions of power.

If all of this talk about artificial intelligence gets you thinking then you should check out the Experimental AI in Games workshop at AIIDE 2015, which is just a few months away. Their accepted papers include “Would You Look At That! Vision-Driven Procedural Level Design” and “An Algorithmic Approach to Decorative Content Placement”.

Previously I posted about other conferences about artificial intelligence.

We Need to Rethink How We Approach Artificial Intelligence

A computer scientist at the University of Toronto, Hector Levesque, is calling on fellow researchers in the field of artificial intelligence (AI) to drastically re-evaluate what they are doing. If we are going to continue research in AI, we need to change how we measure success. The most popular way AI software is evaluated is the Turing test, which is essentially a test of whether an AI can convince a human that it, too, is a human being.

Last year, Wired wrote about how to pass the Turing test.

Using the Turing test as the sole measurement reflects a narrow view of what intelligence means, which is one of my criticisms of the field. Levesque sees huge problems with this too, and goes further, pointing out that people are now developing software solely to pass the Turing test without creating anything remotely intelligent. For an example of this, just check out Cleverbot.
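The trick is old: a hypothetical ELIZA-style responder (Cleverbot’s actual implementation is not public, so this is only a sketch of the general approach) can keep a conversation going through pattern-matching alone, with no understanding behind it:

```python
import re

# Hypothetical toy chatbot in the ELIZA tradition: it "converses" by
# matching surface patterns and echoing the user's words back, which is
# exactly the kind of Turing-test gaming Levesque criticizes.

RULES = [
    (re.compile(r"\bI am (.+)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"\bI (?:want|need) (.+)", re.I), "What would {0} give you?"),
    (re.compile(r"\?$"), "What do you think?"),
]

def respond(message):
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            return template.format(*match.groups())
    return "Tell me more."  # generic fallback keeps the illusion going

print(respond("I am worried about AI"))  # Why do you say you are worried about AI?
```

A handful of rules like these can sustain a surprisingly human-seeming exchange, yet the program has no model of what any of the words mean.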

There are other problems with the field too, such as computer scientists trying to program an AI when there is no clear understanding of what intelligence is. People working in psychology, linguistics, and philosophy would probably take issue, on various levels, with how programmed AI functions (and is built).

Levesque’s critique is covered by the New Yorker, which points out some other problems with the current state of computer science AI research. Interestingly, there are some simple questions that Levesque and others have come up with that can better discern the quality of an AI. These questions are built around the complexity of the English language and what some may call “common sense” (I take issue with how they approach this idea of common sense).
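These questions are Winograd schemas: pairs of sentences that differ by a single word but flip what a pronoun refers to. A minimal sketch (using the well-known trophy/suitcase pair) shows why a shallow heuristic cannot get both variants right:

```python
# A Winograd-schema pair: resolving "it" requires common sense about
# fitting, not surface statistics. Each sentence maps to the noun that
# "it" actually refers to.
SCHEMA = {
    "The trophy doesn't fit in the suitcase because it is too big.": "trophy",
    "The trophy doesn't fit in the suitcase because it is too small.": "suitcase",
}

def last_mentioned(sentence, nouns=("trophy", "suitcase")):
    """Naive baseline: assume the pronoun refers to the most recently mentioned noun."""
    return max(nouns, key=sentence.index)

for sentence, answer in SCHEMA.items():
    guess = last_mentioned(sentence)
    print(guess == answer, sentence)  # correct on one variant, wrong on the other
```

Any fixed surface rule gives the same answer to both sentences, so it is guaranteed to fail on one of them; getting both right requires knowing that big things don’t fit into small ones.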

It’s worth a read and provides a good synopsis of what’s going on in the field of AI research.

Levesque saves his most damning criticism for the end of his paper. It’s not just that contemporary A.I. hasn’t solved these kinds of problems yet; it’s that contemporary A.I. has largely forgotten about them. In Levesque’s view, the field of artificial intelligence has fallen into a trap of “serial silver bulletism,” always looking to the next big thing, whether it’s expert systems or Big Data, but never painstakingly analyzing all of the subtle and deep knowledge that ordinary human beings possess. That’s a gargantuan task— “more like scaling a mountain than shoveling a driveway,” as Levesque writes. But it’s what the field needs to do.

You can read the full text of Levesque’s paper here.
