Social Networking
A.I. & Work
Trademarks of Pseudoscience
I'm very reluctant, and almost ashamed, to even link to this article in The Register; it doesn't deserve your attention, but I'll write about it because of its relationship to my main field of research (Creativity / A.I.), and because it's a prime example of pseudoscience in action.

This guy is apparently responding to a recent interview with Stephen Hawking. In that interview, Hawking suggests that we must advance brain-computer interface technologies (connecting our brains directly to computers) so that the artificial brains of the future contribute to human intelligence rather than oppose it (interesting to hear that Hawking is thinking along these lines). The author of the rebuke, Thomas Greene, in a borderline barbaric manner, blatantly claims that Hawking is an idiot and that human-level A.I. will never be possible.

Now, I've had a saying for many years: if you believe in the Theory of Evolution, you believe that the brain is a machine. Machines can be replicated; hence you believe that human intelligence can be replicated in machines.

Mr. Greene believes in evolution. However, he introduces an interesting (or not) twist: he believes that humans possess a certain quality that machines can't acquire. He calls it "irrational insight", which we (humans) "mainly exhibit in religion, art and literature". What he's referring to is, more or less, creativity, and his point is that computers are too logical to replicate it. His actual point is irrelevant and I'm not going to waste my time answering his pseudoscientific arguments (of which he has more than a handful).

This dude exhibits and combines three commonplace intellectual fallacies, the trademarks of pseudoscience:

(1)
He assumes we know enough to know what we don't know, i.e. that human-level intelligence can only be brought about by natural evolution, and not by any other process.

(2)
He takes a concept that we don't fully understand yet (e.g. insight, creativity, emotions) and announces that it's impossible to replicate, even though he doesn't really know how it works or what made it come about.

(3)
He draws concrete conclusions about scientific unknowns in our world, venturing instantly into religious territory.

There's also a fourth, more annoying than interesting fallacy: most of the concepts he mentions are very ambiguous and ill-defined, collectively obfuscating the pseudoscientific nature of his arguments. This makes the whole article very hard to counter-argue in a sensible manner, and will regrettably cause some poor souls to actually buy into it.

My last words are simply:
Beware pseudoscience.

|
A.I. Systems Integration
A chart illustrating A.I. integration.
After reading up on the A.I. information on Wikipedia, I felt it was completely lacking info on A.I. systems integration, so I decided to write a new article. Integration is a field I find very exciting, and most definitely a large part of future systems.

You can check out the article here; it's a snapshot of what the page looked like right after I wrote it, in case you're looking at this in the future when the article has been edited to shreds. I made the pic on the side here especially for that article, and I'm releasing it under a Creative Commons license.
|
The World's First A.I. Radioshow Host
Picture of Superhumanoid's body
After a stretch of little or no A.I. & Work related blog entries, I thought I'd report a bit on the A.I. lab's project "SuperRadioHost", the world's first fully autonomous radioshow host.

The project is a product of The Center for Analysis and Design of Intelligent Agents, Reykjavík University's artificial intelligence laboratory and, in fact, Iceland's first A.I. lab. While the lab is young (2 years), there has been a very sharp rise in both the number of research projects and the number of students since it was founded, and CADIA has become RU's most active and prominent research laboratory.

SuperRadioHost Image
A promo-picture
for SuperRadioHost
Among the excellent projects at the lab, SuperRadioHost is the first phase in a series of human-like artificial intelligence projects called The CADIA Superhumanoids (the picture above is of the Superhumanoid Body, which I designed in 2005 for the Superhumanoid project). The explicit goal of the SuperRadioHost project is to create a fully autonomous, artificial personality to manage its (his) own radioshow. And we're not talking about just reading the names of the songs before they're played (although that's probably part of the deal), but a talkshow — with live interviews. Yes, SuperRadioHost is intended to call up interviewees by phone and carry out realtime conversations — how cool is that?! To be a bit more specific, these are some of the goals that have been made public:

:: Task, action, sentence and speech planning
:: Dynamic, highly flexible sentence understanding, generation and turntaking
:: Ability to interview humans
:: Ability to interrupt human speaker, and be interrupted


Technical Info

OpenAIR logo
The SuperRadioHost is being built on the principles of the Constructionist Design Methodology (CDM), a methodology for creating interactive artificial intelligences. In short, the CDM advocates a modular approach to A.I., i.e. having a number of specific-function modules, or program parts, produce the overall large-scale behavior of the system. The components communicate through the Psyclone AIOS, an advanced blackboard system for A.I., and the OpenAIR message protocol. The SuperRadioHost is currently using 12 networked desktop machines for various speech and planning processes, along with two workhorse computers for speech synthesis and input pre-processing.
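To make the modular, blackboard-based idea a bit more concrete, here is a minimal sketch in Python of specific-function modules exchanging messages through a shared blackboard. The message types, module names and API below are invented purely for illustration; the real system uses Psyclone and the OpenAIR protocol, whose actual interfaces differ.

```python
# Illustrative blackboard sketch only; NOT the Psyclone/OpenAIR API.
# Modules register interest in message types; the blackboard routes messages
# between them, which is the core idea behind a CDM-style modular system.

from collections import defaultdict
from dataclasses import dataclass, field
from typing import Callable


@dataclass
class Message:
    msg_type: str                       # e.g. "input.speech.recognized" (made-up type)
    content: dict = field(default_factory=dict)


class Blackboard:
    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable[[Message], None]]] = defaultdict(list)

    def subscribe(self, msg_type: str, handler: Callable[[Message], None]) -> None:
        # A module registers a handler for one message type.
        self._subscribers[msg_type].append(handler)

    def post(self, message: Message) -> None:
        # Deliver the message to every module that registered for its type.
        for handler in self._subscribers[message.msg_type]:
            handler(message)


# Two toy modules: a speech "recognizer" that posts results, and a "planner" that reacts.
def recognizer(board: Blackboard, raw_audio: str) -> None:
    board.post(Message("input.speech.recognized", {"text": raw_audio}))


def planner(message: Message) -> None:
    print(f"Planner received: {message.content['text']!r}; planning a reply...")


if __name__ == "__main__":
    board = Blackboard()
    board.subscribe("input.speech.recognized", planner)
    recognizer(board, "Hello, SuperRadioHost!")
```

In the real system each module runs as a separate process, often on a different machine, with the blackboard handling the routing over the network rather than through in-process function calls.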

Launch Date

The SuperRadioHost is still under development at the lab, but was recently displayed for the first time at a public science fair in Reykjavík, Iceland. Needless to say, the project made quite an impact within the Icelandic media; a frequently asked question was whether this would render human-run radioshows obsolete, with radioshow hosts calling one after the other to ask about this new inhuman competitor (humanoids killed the radiostar?). Well, I guess we'll find out next year, when SuperRadioHost, whose show will be called "Radioactive with SuperRadioHost", is scheduled to get his own public, national radioshow in Iceland.

Related Links

:: CADIA
Reykjavík University's A.I. Lab
:: SuperRadioHost
The project page at CADIA
:: Mindmakers.org
An online organization for collaboration on large scale A.I. systems (CDM related)
:: Vélaldin Emergence Engine
My research software project at CADIA
|
The A.I. Lab Revisited
These past few weeks I've been busy finishing my NSN (Icelandic Student Innovation Fund) project, Vélaldin, as well as maintaining headway at school, only to be rewarded with a bad cough and fever.

But sickness, as annoying as it can be, has its upsides. Thank sickness for this log entry, for example.

I thought I'd post some new pictures of the A.I. lab, as the last ones were from before everyone had settled in. I'm still in the process of migrating from home to the lab. As can be seen in some of the pictures, my desk is quite empty. It's hard to leave my home setup alone, as I've grown accustomed to it over the past two years. Then there's the question of data transfer — most of my projects include many dozens of different files and formats. I've been using FTP to fetch data from Serafin (my home machine) when I'm working at the lab, which works okay when I have software at the lab to read it. I guess it will take some time to make the new workstation worthy of perpetual use.

Here are the pictures; they were all taken on my phone, so not the best quality.


A picture of my desk at CADIA
Here's a picture of my desk. Those walls are nice and new, but they cut the workspace in half, as well as making it very difficult to socialize with people on the other side.

A side picture of my desk and workspace
Here's a better view of the table-cluster my workspace is part of.

The North East corner of the lab
Here's a nice view of the north-east corner of the lab (facing away from my desk). Still some boxes waiting to be cleaned out.



A view to the south at the lab
A view to the south.

The logo for the botcave
I have named the lab's project room "The Botcave" — and stuck this nice logo on the door.

I wonder if the next Superhumanoid will look like this: 1960s Batman



If you want to see more pics of the lab, there are more photos in my older post from before everyone moved.


|
ISIR Preparation Era Complete
ISIR_sidetext_smaller
Yesterday I turned in the ISIR report to the European Union. The 33-page booklet covers the year since I founded ISIR, and marks the end of its infancy.


IMG_2170
Doing the formal opening speech for ISIR
at the A.I. Festival 2006
The preparation period for ISIR (Icelandic Society for Intelligence Research) was a long and interesting one. The formal support of the EU ended with the society's official opening on April 29th 2006, at Iceland's first A.I. Festival. Reviewing what we've accomplished and compiling it into a report borders on the surreal — 33 pages hardly do it justice.


Picture 11
Pie chart showing ISIR share in Icelandic A.I. information on the web

Of course, none of this could have been accomplished without the hard work of my fellow board- and founding members. In conclusion, here's an overview of what I consider some of the milestones in the making of ISIR.

:: Well over a hundred web-users and 32 formally registered members
:: 289 pages of free A.I. related information online, doubling the total amount of information available
:: Many dozens of posted A.I. news items
:: Two A.I. seminars for the public in collaboration with CADIA
:: Iceland's first A.I. magazine (available online in PDF)
:: An A.I. festival in which the majority of Icelandic A.I. companies participated, and which an estimated 500-plus people attended


ISIR Mainpage :: http://www.isir.is
ISIRWiki :: http://wiki.isir.is
ISIR Forums :: http://forum.isir.is
|
Sneak-peek at the New A.I. Lab
For some time now, Reykjavík University's Computer Science department has been on its way to moving into its own building, across the street from RU's main building in Ofanleiti. Our new residence will be the old house of Morgunblaðið, Iceland's largest newspaper, which recently moved to the outskirts of Reykjavík.

Of course, what this means is that CADIA and its resident labrats will get new headquarters! Due to issues that require my constant attention here at home I haven't been up at the lab this summer, but today I went there to help out with packing up the old place. Along the way I managed to get a guided tour of the new house, and man, the place is great! Unfortunately, I didn't bring a camera with me, but I did have my phonecam — so here are some photos that I took, both of the old lab and the new.


CADIA_oldPlace2
Here's an overview of the old lab, or really just the stuff we were moving. Desks and workstations are on the left and not shown in the picture.


CADIA_oldPlace
Jónheiður fighting the stack of boxes at the old lab.


CADIA_agustAndJonheidur
Jónheiður and Ágúst contemplate which boxes to move next.


CADIA_newHouse
Ah, now here's the house we're moving into. Looks very futuristic, I must say — and I can definitely see a lot of futuristic projects taking place in there.


CADIA_stairs
Getting closer now, here we are at the first floor. The stairs lead up to the new lab.


CADIA_southOverview
Ladies and gentlemen — the new A.I. Laboratory!
This is an overview taken from the northern end. The circular tables there on the left side of the image are basically in the middle of the lab. The lab's space is enclosed by an array of great art-deco desks, as can be seen on the right side of the image and in the picture below.
CADIA_SouthernTables

CADIA_tableCloseup
Couldn't resist taking a closeup of the desks.


CADIA_drawerCloseup
And of the drawers. Though my enthusiasm isn't shared by all the labrats, I really like the design.


CADIA_northernEnd
And here is the northern end of the lab. The glass room you see there will be the "Project Room", i.e. the room where the robots come alive!


CADIA_viewToNorthProjectRoom
This is inside the Project Room; that's the room's northern glass wall there.


CADIA_projectRoom
View to the south inside the Project Room. Ágúst, a fellow ISIR board member, is in the doorway.


CADIA_northwestCorner
Here's a better view of the desks, at a table in the northeastern corner of the lab.


CADIA_computerRoom
Finally, we have a preview of the soon-to-be computer room, located directly below the new A.I. Lab. At last we labrats will be able to work in peace, away from the whirring of clusters.


All in all, I was thrilled after my visit. The place is absolutely fantastic and I look forward to working there.
|
A.I. in Smalltalk
A.I. in Smalltalk 2006©Thórisson
In most conversations, people tend not to ask questions about what I'm doing beyond its name, which is usually followed by a nod and a change of subject. When it comes to computers, and especially artificial intelligence, it seems generally not to be considered good conversation material. The process is depicted in figure 1. In the second frame, note the totally blank expression on the friendly person's face while he nods, in comparison to my intense, excited smile of joy at the prospect of an interesting conversation about work.

Governed by a concern for my own health, I've decided not even to contemplate the possibility of someone finding science uninteresting; so the closest logical reason is that many think they wouldn't understand it. I'd already written a hefty overview explaining what I do and why I do it, when I realized that the text was a bit on the heavier side for a weblog, and that upon reading half of it, most people would probably act as depicted in frame 3 of figure 1. Instead, I made this entry a Normality Certified™ account of my experiences of A.I. in smalltalk1.


Now, first, let me point out that of course A.I. isn't smalltalk material — because it generally requires more than a "small talk" to even reach an agreement on how to define the words being used in the conversation (e.g. defining "intelligence"). In this respect, small-talk is closely related to small-thought — so my frustration doesn't really derive from no one wanting to talk about A.I. so much as from the fact that people generally don't like thinking.


Normality_Certified

Overall, I've heard many responses — but here is a selection of the ones I most frequently receive when my work is mentioned.


"So I'm going to have to kill you before you create
the robots that take over the world?"

A regular reply from those familiar with certain movies. Usually stated in a sarcastic tone, sometimes implying that A.I. belongs in fairytales (robotales?), with a subtle undertone of fear that it won't. There are many different versions of this response, but they all mean the same thing with their references to Terminator and the Hollywood idiosyncrasy.

"Have you created anything really intelligent?"

An unfortunate thing about this question is that there is no good answer to it if you're keeping things smalltalky, except just saying "Yes" if you think you have, or "No" if you haven't. Any attempt at intelligent answers like "Well, that depends on how you define intelligence" will often cause a smalltalker to think you really haven't, but that you're trying to find a way to make it seem like you have.

Note also the emphasis on "really" — the word "artificial" seems to be generally interpreted as a synonym for "fake". Fortunately, comparing artificial intelligence to artificial fabric works very well, as most people do realize that artificial fabrics are real.

"Oh, so ... computers?"

Chances are that people of the analog generations use this reply.

"What?"

Elderly relatives' reply. Either (a) the person wants you to explain it, then starts talking about the weather, or (b) she doesn't, and then starts talking about the weather. Both events a and b are preceded by the person realizing the association of A.I. with Arnold Schwarzenegger.

In the end, I'm generally OK with any response I get. We're all different, and I respect people's choices and interests. But I have one more tale to tell in this context. The other day insomnia struck again, and I had little choice but to put on a DVD from our Friends collection. It was then that I had a very disconcerting reminder of what the majority of people think of science.

So in this scene, the characters Ross (David Schwimmer) and Chandler (Matthew Perry) are "hanging out" when Ross mentions: "I just finished this fascinating book. By the year 2030, there'll be computers that can carry out the same amount of functions as an actual human brain. So theoretically you could download your thoughts and memories into this computer and live forever as a machine."

But then the punchline came: Chandler pretended to doze off with the appropriate satirical snoring sound, followed by that horrible studio-laughter. Now, I like Friends, and I like Chandler, but I didn't laugh this time — I was just plain bummed that Ross couldn't talk more about this book!

Let us examine what this made me think at the time:

(1) Normal people's ideas of interesting conversations do not involve science, and usually not events further away than a week (unless it's a concert).
(2) I'm not a Normality Certified™ individual


In point 1, I'm not referring to the conversation of the fictional characters Chandler and Ross. I'm talking about the very-real people who watch the show and probably rolled around on their nacho-covered floors laughing their asses off over the mention of something "so obnoxiously boring" — which is exactly what the very-real scriptwriters knew, and were counting on happening.

See the definition of Normality Certified™ below for reference on point 2. But the feeling I got watching the scene was both eerie and relieving. I'll leave it up to the respected reader to hypothesize why.

So, in conclusion — draw your own conclusions, and find the sarcastic hidden message in this article2.



1. That's generally called being codependent, but writing a public weblog no one wants to read kind of makes its being public pointless.
1b. Normality Certified™ means that the contents of the product or products do not break the borders of normal, although the Institute of Norm makes no guarantees that your sense of normal is normal enough to perceive the contents as normal.
2. Yes. There really is a hidden message. Hint: It's spread over the whole article.


|