Artificial intelligence is revolutionizing access to information

 
grioo.com Forum Index -> Sciences & Technologies
M.O.P.
Super Poster


Joined: 11 Mar 2004
Posts: 2939

Posted: Mon 14 Feb 2011 13:36    Post subject: Artificial intelligence is revolutionizing access to information

Watson, an artificial intelligence developed by IBM, could soon revolutionize the synthesis of information, with applications in fields as diverse as assisting doctors and scientists, financial analysis, call-center agents, and more.

-> Watson faces the best champions of the American general-knowledge quiz show Jeopardy! today

-> Documentary: The Smartest Machine on Earth

-> IBM Watson: the outlook


_________________
Life is a privilege; it owes you nothing!
You owe it everything, namely your life.
Panafricain
Super Poster


Joined: 22 Feb 2004
Posts: 1124

Posted: Mon 14 Feb 2011 14:57    Post subject:

Hi M.O.P., thanks for the post. I'll watch the documentary as soon as I have a bit of time. The question I have about this program called "Watson" is whether it is a genuine revolution or a marketing stunt by IBM.

I play chess, and I know that today there is a good number of chess programs that are far stronger than the best players of all time (Kasparov, Karpov, Fischer, etc.). The famous man-versus-machine duel in chess no longer makes sense in 2011, because chess programs now outclass human players.

But the strength of these chess programs has nothing to do with any kind of intelligence. First, quite simply, because they can do nothing other than play chess; but also because technological progress has meant that the power of the computers they run on lets them evaluate more positions than before. These programs do not necessarily play better than the ones developed earlier; above all, they calculate faster and deeper.

IBM had developed a program called "Deep Blue" that beat Kasparov at the end of the 1990s (1997, if I'm not mistaken). Deep Blue's victory was an enormous publicity coup for IBM, but when Kasparov asked for a rematch, IBM refused, preferring to dismantle Deep Blue and cash in the media dividends of its victory.


The question this example leaves me with is therefore: does Watson mark a real advance in artificial intelligence, or is it just a program that plays Jeopardy! very well?
Jofrere
Super Poster


Joined: 01 Mar 2004
Posts: 1327
Location: Paris

Posted: Mon 14 Feb 2011 15:20    Post subject:

Hi Panafricain,
Schematically, a program that plays chess or any other game builds a tree containing the game's possibilities. In a complex game like chess, because the pieces can move in so many different ways, the number of combinations explodes very quickly.
The ever-increasing power of machines makes it possible to process billions of operations per second, but I think it is absolutely impossible to do without an extremely effective AI algorithm. Deep Blue, I believe, also benefited from AI research.
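That combinatorial explosion can be made concrete with a quick computation; the figure of roughly 35 legal moves per position is a commonly cited average for chess, assumed here for illustration:

```python
# Quick illustration of combinatorial explosion: with ~35 legal moves per
# chess position (a commonly cited average, assumed here), the number of
# positions a full-width search must consider grows exponentially.
BRANCHING_FACTOR = 35

def positions_at_depth(depth):
    """Leaf positions reached by searching `depth` plies ahead."""
    return BRANCHING_FACTOR ** depth

for depth in (2, 4, 6):
    print(f"{depth} plies: ~{positions_at_depth(depth):,} positions")
```

At only six plies the count is already near two billion, which is why raw hardware speed alone is never enough.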
As for the documentary, I haven't had time to watch it yet either, but I won't miss it.
_________________
An intelligent enemy is preferable
to a foolish friend
act as a man of thought, think as a man of action
M.O.P.
Super Poster


Joined: 11 Mar 2004
Posts: 2939

Posted: Mon 14 Feb 2011 19:12    Post subject:

Hi Panafricain and Jofrere,
The documentary will answer most of your questions. A whole new paradigm comes into play here when you compare Watson with Deep Blue, which beat Kasparov at chess.
The differences begin with the very problem to be solved.
Chess has precise rules that a program can apply 100 percent. In Deep Blue's case, the approach was simply brute force: before each move, it considered every possible counter-move and compared their consequences,
and the move with the most favorable outcome was chosen. Such an approach is very well suited to the binary structure of computers.
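A minimal minimax sketch of that brute-force idea: before each move, enumerate every possible continuation, compare the outcomes, and keep the best one. The toy take-1-or-2 game below stands in for chess, whose tree is far too large to search exhaustively; the game and its scoring are invented for illustration only.

```python
# Minimax: exhaustively search the game tree and back up the best score.
def minimax(state, maximizing, children, evaluate):
    """Return the game-theoretic value of `state` by full-width search."""
    moves = children(state)
    if not moves:                                   # leaf: score the position
        return evaluate(state, maximizing)
    scores = [minimax(m, not maximizing, children, evaluate) for m in moves]
    return max(scores) if maximizing else min(scores)

# Toy game: a pile of n tokens; each turn a player removes 1 or 2 tokens,
# and whoever takes the last token wins.
def children(n):
    return [n - take for take in (1, 2) if n - take >= 0]

def evaluate(n, maximizing):
    # Empty pile: the player *to move* lost (the opponent took the last token).
    return -1 if maximizing else 1

print(minimax(4, True, children, evaluate))   # 1: the first player can win a 4-token pile
```

Deep Blue's real search added pruning, evaluation heuristics, and special hardware, but the enumerate-and-compare skeleton is the same.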

With Watson and the game Jeopardy!, the problem is completely different: we are dealing with natural language. Understanding the question is already a challenge in itself; finding an appropriate answer is another. It is impossible to define rules for every possible combination of words, phrases, and contexts.
Another approach was therefore needed: machine learning,
a branch of artificial intelligence research that has evolved considerably in recent years and is now applied in many fields.

Generally speaking, the idea is to let the program improve by learning from its past mistakes. The machine analyzes empirical data, looking for examples of the features it cares about, builds a list of candidates, ranks them according to certain rules (statistics, probability, etc.), and returns the most probable answer together with a confidence weight.
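A toy sketch of that ranking step: each candidate answer gets a few feature scores, learned per-feature weights combine them, and the highest-scoring candidate wins. Feature names, weights, and candidates are all invented for illustration; this is not Watson's real model.

```python
# Per-feature weights, normally fit on past question/answer pairs.
WEIGHTS = {"text_match": 0.5, "type_match": 0.3, "popularity": 0.2}

def score(features):
    """Weighted sum of a candidate's feature scores."""
    return sum(WEIGHTS[name] * value for name, value in features.items())

candidates = {
    "Toronto": {"text_match": 0.9, "type_match": 0.1, "popularity": 0.6},
    "Chicago": {"text_match": 0.7, "type_match": 0.9, "popularity": 0.7},
}

best = max(candidates, key=lambda name: score(candidates[name]))
print(best)   # Chicago: its strong type match outweighs Toronto's text match
```

The learning part is fitting the weights from labeled examples so that correct answers tend to come out on top.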

Panafricain wrote:
IBM had developed a program called "Deep Blue" that beat Kasparov at the end of the 1990s (1997, if I'm not mistaken). Deep Blue's victory was an enormous publicity coup for IBM, but when Kasparov asked for a rematch, IBM refused, preferring to dismantle Deep Blue and cash in the media dividends of its victory.


The question this example leaves me with is therefore: does Watson mark a real advance in artificial intelligence, or is it just a program that plays Jeopardy! very well?


Of course they are aiming for a media coup with Jeopardy!, but it is of a different order than with Deep Blue.
For one thing, an army of their scientists is involved in the project, directly or indirectly; it is a whole new industry they are tackling.

For example, Google Translate and all the new translation programs appearing at the moment are based on the same concept; IBM is not alone here. Another example: when Amazon puts together recommendations for you based on other users with a similar profile, it relies on the same concept.

With the mountains of data practically asphyxiating every sector of activity today, everyone is waiting for solutions like this: AIs capable of assisting you in your analyses.
See, for example, the possible applications imagined in science fiction: the computer you talk to and that assists you in the Star Trek films, or HAL 9000 in the Space Odyssey films.
M.O.P.
Super Poster


Joined: 11 Mar 2004
Posts: 2939

Posted: Mon 14 Feb 2011 23:08    Post subject:

How Watson works: a conversation with Eric Brown, IBM Research Manager



For nearly two years IBM scientists have been working on a highly advanced Question Answering (QA) system, codenamed “Watson.” The scientists believe that the computing system will be able to understand complex questions and answer with enough precision, confidence, and speed to compete in the first-ever man vs. machine Jeopardy! competition, which will air on February 14, 15 and 16, 2011.

We had some questions, so we spoke with Dr. Eric Brown, a research manager at the IBM T.J. Watson Research Center who works in several areas on DeepQA, including designing and implementing the DeepQA architecture, developing and assessing hardware deployment models for DeepQA’s development and production cluster, and developing algorithms for special question processing. Eric has a background in Information Retrieval and Computer Systems Engineering.

Q: IBM’s Watson and Jeopardy! FAQ states that IBM expects Watson to “reach competitive levels of human performance.” Could you quantify that, and is there a range of dates?

Brown: So, competitive levels of human performance at the task of playing Jeopardy! is what we’re talking about there. Just to establish the context of what we’re talking about, what we’ve been working on for the last four years is this core technology that we call “DeepQA,” for “deep question-answering.” The first application of that technology is the system called Watson, which was built and tuned to play Jeopardy!

And we’re basically using Jeopardy! as a benchmark or a challenge problem to drive the development of that technology, and as a way to measure the progress of the technology. So, in this particular FAQ, when we’re talking about reaching competitive levels with human performance, here we’re talking specifically about Watson being able to compete in a real game of Jeopardy! — live, in real time — at the level of a human competitor. And, in fact, even better, at the level of a grand champion.

The way that we’ve been evaluating that is, this past fall, Watson has been competing in live sparring matches against Jeopardy! players that have actually won several games of Jeopardy! and gone on to compete in the Jeopardy! Tournament of Champions, so we’ll call them champion Jeopardy! players. We basically have played 55 different sparring games against these champion Jeopardy! players, and built up a record of performance there.

Then we have the Final Exhibition Match, where Watson is competing against Ken Jennings and Brad Rutter, and that’s the match that will actually air on television, on Jeopardy!, on February 14th, 15th and 16th. So, it’s the results of these various competitions that will allow us to evaluate this claim, of whether or not Watson has achieved human-level performance at the game of Jeopardy!.

So now, to speak more broadly is a little bit of a challenge, because Watson is the first application of this technology that we’ve fully developed. What we’ve started to do is look at other applications of the technology that are more aligned with IBM’s clients and business interests, and business problems that our clients are interested in solving. And the first one that we’re taking a look at is applying this technology in the medical domain — using it to do what we call differential diagnosis.

Now, using the capabilities of DeepQA to automatically generate hypotheses and gather evidence to support or refute those hypotheses, and then evaluate all of this evidence through an open, pluggable architecture of analytics, and then combine and weigh all those results to evaluate those hypotheses and make recommendations — that’s where we’re going with this technology. I think what you’re starting to get at is, in the broader context of artificial intelligence, are we making claims about DeepQA at that level? At this point, I’m not sure we’re ready to make any claims in a broader context.

Q: So a related question is: How well would Watson do on the Turing test at this point in time?

Brown: Whenever I get that question, my initial response is: If you were to rephrase the Turing test slightly, and couch it in terms of, if you had two players playing Jeopardy!, and you couldn’t tell which one was the computer and which one was the human, and that was the Turing test, then I think Watson would pass that very easily. But, of course, the Turing test is defined more broadly and open-ended, where you really have an open-ended dialogue, and Watson is not up to that task yet.

Within the constraints of the Jeopardy! game, and the way that those questions are phrased — again, using open-ended natural language, Watson is able to understand a very wide variety of these questions over a very broad domain, and answer them with an extremely high level of accuracy. But what we haven’t worked on yet is more of a dialogue system, where Watson starts to interact in an even more natural way.

I will say, though, that that is definitely a direction we want to take the technology in, especially as we apply it to other domains, where if you think of a decision support system or something that is assisting a professional in their information gathering, analysis, and decision-making processes, you would go to a more interactive dialogue scenario. But it’s still not up to the task of actually passing the Turing test.

Q: What are particularly good examples of questions that Watson gets correct, and that show its ability to deal with subtle human-thinking capabilities, like irony, riddles, or metaphor?

Brown: Well, there are a couple of commonly recurring, what we call puzzle-type questions in Jeopardy!. One kind of puzzle is what's called "rhyme time," where the clue actually breaks down into two parts, and the system has to recognize that there are two parts to the clue and each sub-part has an answer, and then the constraint is that the answers to the two sub-parts have to rhyme. An example of one of those kinds of clues is the category "edible rhyme time," and the clue is: "A long, tiresome speech delivered by a frothy pie topping."

So there, the system has to recognize that there are two parts to this clue. The first part is a long, tiresome speech, and the second part is a frothy pie topping. Then, it comes up with possible answers for each of those sub-parts. So a long, tiresome speech could be a diatribe, or in this case (which is the correct answer), a harangue. And then, a frothy pie topping could be something like meringue, which is, in fact, the answer we are looking for.

But there are other frothy pie toppings, like whipped cream or things like that, so then you've got possible answers for each sub-clue, and you have this rhyming constraint, where now you look for the two words that rhyme. In this case, it's meringue and harangue, and the final answer, using typical rules of English, where you put the modifier before the noun, you then rewrite it to be meringue harangue. So, the correct answer to the clue would be a "meringue harangue."
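The two-part-plus-constraint process Brown describes can be sketched roughly as follows. The candidate lists and the crude compare-the-last-letters rhyme test are assumptions for illustration; real rhyme detection would use phonetics.

```python
# Generate candidates for each sub-clue, then keep pairs that satisfy the
# rhyme constraint, rewriting modifier-first per the rule described above.
speech_candidates = ["diatribe", "harangue", "tirade"]          # "long, tiresome speech"
topping_candidates = ["meringue", "whipped cream", "custard"]   # "frothy pie topping"

def rhymes(a, b, n=4):
    return a[-n:] == b[-n:]      # crude stand-in for real phonetic rhyming

pairs = [f"{topping} {speech}"
         for speech in speech_candidates
         for topping in topping_candidates
         if rhymes(speech, topping)]
print(pairs)   # ['meringue harangue']
```

Only one pairing survives the constraint, which is exactly what makes these puzzle clues tractable once the clue has been decomposed.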

Q: OK, what about metaphor? That’s a little bit more of a challenge.

Brown: Off the top of my head, I would probably have to dig a little bit for a good example of something like that. You know, what we do often see is that the clues often have what I’ll call extraneous elements to them, that are there more for entertainment value than actually being required to answer the clue. What Watson seems to be particularly good at is, picking through all of that and zeroing in on the relevant part of the clue.

In fact, for me anyway, I’m a terrible Jeopardy! player, there are some clues that are very complex, that I’m still trying to figure out what the clue even means by the time Watson has actually come up with the correct answer. Just because, there’s all of this additional wording in there, that you kind of have to pick through to actually get at what’s really being asked.

The reason that being able to do that is so interesting is because you can easily imagine real business applications where the questions that are being expressed, or the scenario that you have to analyze and solve, is not worded very crisply or concisely. Especially, say, a tech-support problem where an end user is just doing a brain dump of everything that’s gone wrong, or maybe giving you details that are completely superfluous, and you need to be able to pick through that and identify the core question. Watson seems to be particularly good at doing that.

Smarter search engines?
Q: So would you see Watson moving toward serving as a back end to search engines in the future, and moving toward cloud computing?

Brown: Actually, it’s almost the reverse. Watson solves a different problem than a traditional Web search engine does. I think ultimately, both of these technologies will have a role in supporting humans in everyday information seeking, gathering and analysis tasks. Watson, or DeepQA, actually uses several search engines inside it to do some of its preliminary searching and candidate answer generation.

So, that’s why I say it’s almost the reverse — rather than a search engine using Watson, it’s already the case that Watson uses a number of different search engines in its initial stages of searching through all of its content to find possible candidate answers. These candidate answers then go through much deeper levels of scoring and analysis and gathering additional evidence, which is analyzed by a variety of analytics.

But the reason I want to make a distinction between the two high-level tasks of using a search engine, versus using something like Watson, is Watson or the underlying technology is really more general than “question in, answer out.” It’s really much more of a hypothesis generation system, with the ability to gather evidence that either supports or refutes these hypotheses, and then analyze and evaluate that evidence using a variety of deep analytics. For instance, natural language processing analytics that can understand grammar or syntax or semantics, as well as other, say, semantic-web type technologies that can compare entities and relationships and evaluate things at that level. All of this is used in more of a decision support system, so it’s not just straightforward search; it’s solving a different kind of problem.

Q: Would it be accurate to say that Watson’s capability is somewhat analogous to Wolfram Alpha, in terms of providing that capability?

Brown: I think that the Watson system that plays Jeopardy! has a lot of similarities with Wolfram Alpha at the interface level, in terms of a question in and a precise answer out. But, once you peel back the interface layer and actually look at the implementation, and the types of questions that each of those two systems can actually answer, things start to differ quite a bit. My understanding of Wolfram Alpha, which is not complete, is that it’s built largely on manually curated databases and a lot of human knowledge engineering, and then when it is able to understand the question, it can then precisely map that into searches or queries against its underlying data to come up with a precise answer.

Watson does not take an approach of trying to curate the underlying data or build databases or structured resources, especially in a manual fashion, but rather, it relies on unstructured data — documents, things like encyclopedias, web pages, dictionaries, unstructured content.

Then we apply a wide variety of text analysis processes, a whole processing pipeline, to analyze that content and automatically derive the structure and meaning from it. Similarly, when we get a question as input, we don’t try and map it into some known ontology or taxonomy, or into a structured query language.

Rather, we use natural language processing techniques to try and understand what the question is looking for, and then come up with candidate answers and score those. So in general, Watson is, I’ll say more robust for a broader range of questions and has a much broader way of expressing those questions.

Watson on an iPhone?
Q: Well, given that robustness, when do you expect it to be available to the general public — let's say, be able to provide what you get on Answers.com?

Brown: That is what we’re currently working on now, applying the underlying technology to a number of different business applications. I think there’s a couple of elements to that answer. The first question is: would IBM get into that kind of business? I don’t know if that kind of business model makes sense for IBM, so I don’t know if we have any plans there. Our focus, at least with this technology initially, is going to be looking at specific business applications, and as I said, right now, the first one that we’re interested in exploring is in the medical domain or in the healthcare domain. We really don’t have an estimate of when something like that would be available.

Q: Are you free to mention other possible applications?

Brown: The other applications that we're thinking about would include things like help desk, tech support, and business intelligence applications … basically any place where you need to go beyond just document search, but you have deeper questions or scenarios that require gathering evidence and evaluating all of that evidence to come up with meaningful answers.

Q: Does Watson use speech recognition?

Brown: No, what it does use is speech synthesis, text-to-speech, to verbalize its responses and, when playing Jeopardy!, make selections. When we started this project, the core research problem was the question answering technology. When we started looking at applying it to Jeopardy!, I think very early on we decided that we did not want to introduce a potential layer of error by relying on speech recognition to understand the question.

If you look at the way Jeopardy! is played, that's actually, I think, a fair requirement, because when a clue is revealed, the host reads it aloud so the contestants can hear it, but at the same time the full text of the clue is shown on a large display that the players can read instantaneously. So when a clue is revealed, the same full text the human contestants see is also sent to Watson in an ASCII file format.

Q: Simultaneously with the reading of the clue, right?

Brown: Exactly. Basically, the time that it takes the host to read the clue, is the amount of time that all of the players have to come up with their answer, and to decide whether they want to attempt to answer the clue. Each player has a signaling device, what we refer to as a buzzer, and the signaling device is actually not enabled until the host finishes reading the clue.

Then, at that point, if you think you know the answer and are confident enough to ring in, you can depress your signaling device, and if you're the first one to ring in, you get to try and answer the clue. So, it's basically the time it takes the host to read the clue, that's the amount of time Watson has to come up with its answer and its confidence, and decide whether or not it's going to try and ring in and answer the clue.

Q: Interesting. So is Watson actually working on it as the person is reading the clue?

Brown: Yes. As it receives the text of the clue at the very beginning, it immediately starts processing the clue and trying to come up with an answer. [The "answer" buttons are disabled until Alex Trebek finishes reading the entire question.]

Does Watson need a supercomputer?
Q: So that brings up a question: the IBM FAQ mentions the possibility of moving to Blue Gene. Are you going to stay with POWER7, or move to a supercomputer?

Brown: That is actually an old question in the FAQ; it probably should be updated, and I apologize for that. Earlier on in the project, we explored moving to Blue Gene, and it turns out that the computational model supported by Blue Gene is not ideal for what we need to do in Watson. Blue Gene works great for highly data-parallel processing problems, like doing protein folding or weather forecasting or doing these massive simulations, but the analytics that we run in Watson tend to be fairly expensive and heavyweight from a single-thread point of view, and actually run much better on a much more powerful processor.

If you look at the power of a single processor in a Blue Gene, at least the Blue Gene/P, those are something like an 850 MHz power processor, and that does not provide the single-threaded performance that most of the analytics that we run in Watson would require.

The POWER7, however, and in particular the Power 750 server that Watson is deployed on, does have that performance. In fact, it’s ideal as a workload-optimized system for Watson. Each server has 32 cores in it, and each one of these cores is a 3.55 GHz, full-blown POWER7 processor. And each server can have up to 256 GB of RAM, and our analytics — not only are they expensive to run, but they also use fairly large resources, and we typically want these loaded into main memory.

So what we can do with the core density of a Power 750 is run several copies of a particular analytic or collection of analytics that share some common big resources, and have that all loaded into this shared memory, which turns out to really be the ideal platform for running Watson. In our production system, I think we actually have a total of 90 of these Power 750 servers available to run Watson while it’s playing Jeopardy!
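As a quick sanity check, the per-server figures quoted here multiply out to the cluster total given later in the interview (the aggregate RAM figure is derived here, not stated in the source):

```python
# Back-of-the-envelope check of the Watson production-cluster figures.
servers = 90               # Power 750 servers in the production system
cores_per_server = 32      # 3.55 GHz POWER7 cores each
ram_per_server_gb = 256    # "up to 256 GB" of RAM per server

total_cores = servers * cores_per_server
total_ram_gb = servers * ram_per_server_gb
print(total_cores, "cores,", total_ram_gb, "GB RAM")   # 2880 cores, 23040 GB RAM
```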

Q: Can you expand on the single-threading model?

Brown: Well, when I say single threading, the only point I’m making there is that we have some fairly heavyweight analytics that run on a single thread, so they are not internally parallelized; they’re basically running as sequential processes. So in order for them to run fast enough, they need a fairly powerful processor. When Watson is actually processing one of these clues, we have many copies of these analytics running in parallel, distributed across essentially, a network of computers.

When you look at the processing of a clue, throughout the entire pipeline, at the beginning, we start with a single clue, we analyze the clue, and then we go through a candidate generation phase, which actually runs several different primary searches, which each produce on the order of 50 search results. Then, each search result can produce several candidate answers, and so by the time we’ve generated all of our candidate answers, we might have three to five hundred candidate answers for the clue.

Now, all of these candidate answers can be processed independently and in parallel, so now they fan out to answer-scoring analytics that are distributed across this cluster, and these answer-scoring analytics score the answers. Then, we run additional searches for the answers to gather more evidence, and then run deep analytics on each piece of evidence, so each candidate answer might go and generate 20 pieces of evidence to support that answer.

Now, all of this evidence can be analyzed independently and in parallel, so that fans out again. Now you have evidence being deeply analyzed on the cluster, and then all of these analytics produce scores that ultimately get merged together, using a machine-learning framework to weight the scores and produce a final ranked order for the candidate answers, as well as a final confidence in them. Then, that’s what comes out in the end.

So, if you kind of visualize a fan-out going from the beginning of the process to the end, and then kind of collapsing back at the end, to the final ranked answers, it’s this fan out and generation of all of these intermediate candidate answers and pieces of evidence that can be scored independently that allow us to leverage a big parallel-processing cluster.
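That fan-out/fan-in shape can be sketched with a thread pool: candidates are scored independently in parallel, then the scores are merged into one final ranking. The scoring function here is a trivial stand-in for Watson's deep evidence-scoring analytics.

```python
from concurrent.futures import ThreadPoolExecutor

def score_candidate(candidate):
    # Placeholder "analytic": a real one would gather and score evidence.
    return candidate, len(candidate) / 10.0

answers = ["harangue", "tirade", "rant"]

with ThreadPoolExecutor() as pool:                            # fan out
    scored = list(pool.map(score_candidate, answers))

ranking = sorted(scored, key=lambda cs: cs[1], reverse=True)  # fan in / merge
print(ranking[0][0])   # harangue
```

Because each candidate (and each piece of evidence) is independent, the same pattern scales from a thread pool on one machine to a cluster of thousands of cores.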

Q: Why wouldn’t Blue Gene/P be capable of that?

Brown: The issue we ran into with the Blue Gene/P is that, to give you sort of a concrete but hypothetical example: let’s say we have an algorithm that, on a POWER7 processing core, takes 500 milliseconds to score all of the evidence for a particular candidate answer. Now, if we take that same algorithm and run it on a Blue Gene, on a single core of the Blue Gene, which is only 850 MHz, let’s say, it’s ten times slower. So, now it’s going to take five seconds, and that already exceeds the time budget for getting the entire clue answered, because on average you have about three seconds to come up with your answer and your confidence, and decide whether or not you’re going to ring in.

So what I mean by this single-threaded processing performance is that there are just certain analytics that the total time they take, if it exceeds a certain budget, is already too slow, and there’s no way to make it faster by doing things in parallel. Due to the sequential processing that particular analytic does, if the processor can’t do it fast enough, it’s going to be too slow. That’s the main reason why the POWER7 works much better with this particular application than Blue Gene.
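The arithmetic in that hypothetical example, spelled out (all three numbers are the rough figures quoted above, not measurements):

```python
# A 500 ms analytic on a core ~10x slower already blows the per-clue budget.
power7_ms = 500      # hypothetical per-candidate analytic time on POWER7
slowdown = 10        # rough single-thread gap vs. an 850 MHz Blue Gene/P core
budget_ms = 3000     # ~3 s to produce an answer and decide to buzz in

blue_gene_ms = power7_ms * slowdown
print(blue_gene_ms, blue_gene_ms > budget_ms)   # 5000 True
```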

It’s based on the power of a single core, but then also the ability to have a large number of cores in the whole cluster. Our entire POWER7 cluster that we’re running Watson on has 2,880 cores. One rack of Blue Gene/P actually has 4,096 cores, but these are 850 MHz cores, compared to the 3.55 GHz core on the POWER7.


M.O.P.
Super Poster


Joined: 11 Mar 2004
Posts: 2939

Posted: Mon 14 Feb 2011 23:48    Post subject:

IBM FAQ on Watson and the DeepQA platform that underpins Watson
Quote:
The DeepQA Project

A computer system that can directly and precisely answer natural language questions over an open and broad range of knowledge has been envisioned by scientists and writers since the advent of computers themselves. Consider, for example, the "Computer" in Star Trek. Taken to its ultimate form, broad and accurate open-domain question answering may well represent a crowning achievement for the field of Artificial Intelligence (AI).

While current computers can store and deliver a wealth of digital content created by humans, they are unable to operate over it in human terms. The quest for building a computer system that can do open-domain Question Answering is ultimately driven by a broader vision that sees computers operating more effectively in human terms rather than strictly computer terms. They should function in ways that understand complex information requirements, as people would express them, for example, in natural language questions or interactive dialogs. Computers should deliver precise, meaningful responses, and synthesize, integrate, and rapidly reason over the breadth of human knowledge as it is most rapidly and naturally produced -- in natural language text.

The possibilities for enriching our global community and accelerating the pace at which we can exploit and expand human knowledge, solve problems and help each other in ways never before imagined, rests on our ability to bring information technology out of the era of operating in computer terms and into the era of operating in human terms.

The DeepQA project at IBM shapes a grand challenge in Computer Science that aims to illustrate how the wide and growing accessibility of natural language content and the integration and advancement of Natural Language Processing, Information Retrieval, Machine Learning, Knowledge Representation and Reasoning, and massively parallel computation can drive open-domain automatic Question Answering technology to a point where it clearly and consistently rivals the best human performance.

A first stop along the way is the Jeopardy! Challenge...

_________________
Life is a privilege; it owes you nothing!
You owe it everything, in this case your life
Panafricain
Super Posteur


Joined: 22 Feb 2004
Posts: 1124

Posted: Tue 15 Feb 2011 18:46

http://www.time.com/time/specials/packages/article/0,28804,2049187_2049195,00.html?iid=moreontime


http://www.time.com/time/specials/packages/completelist/0,29569,2049187,00.html

Top 10 Man-vs.-Machine Moments

[i]An IBM supercomputer took on Jeopardy! champions Ken Jennings and Brad Rutter in an epic battle of humans vs. artificial intelligence on Monday. But the Jeopardy! contest was only the latest in a long-running battle. TIME takes a look at other competitions that tested humans' abilities against machines.[/i]

***Garry Kasparov vs. Deep Blue
By Alexandra Silver Tuesday, Feb. 15, 2011

In 1996, world chess champion Garry Kasparov took on an IBM RS/6000 SP known as Deep Blue. Kasparov called it "the monster" and, TIME reported, "spent much of the week grimacing and holding his head in frustration as he sat across the board from some stone-faced IBM scientist taking instructions from the computer." While Kasparov won the match, Deep Blue did win one of the games, marking the first time that a computer had bested a world champion under tournament conditions. "I could feel — I could smell — a new kind of intelligence across the table," Kasparov wrote in TIME. But, he concluded in that article, "Although I think I did see some signs of intelligence, it's a weird kind, an inefficient, inflexible kind that makes me think I have a few years left." He didn't. In 1997, an upgraded Deep Blue, one that could evaluate 200 million chess positions per second, won its rematch against Kasparov.
Read TIME's 1996 cover story "Can Machines Think?"
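The game-tree search at the heart of engines like Deep Blue is, in textbook form, minimax with alpha-beta pruning; evaluating "200 million positions per second" meant running a far richer version of this on custom hardware. The sketch below is the standard textbook algorithm on an invented toy tree, not Deep Blue's actual implementation.

```python
# Minimax with alpha-beta pruning over an explicit toy game tree.
# `children` maps a position to its successors; `evaluate` scores leaves.

def alphabeta(node, depth, alpha, beta, maximizing, children, evaluate):
    kids = children(node)
    if depth == 0 or not kids:
        return evaluate(node)
    if maximizing:
        value = float("-inf")
        for child in kids:
            value = max(value, alphabeta(child, depth - 1, alpha, beta,
                                         False, children, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:  # opponent will avoid this line: prune
                break
        return value
    else:
        value = float("inf")
        for child in kids:
            value = min(value, alphabeta(child, depth - 1, alpha, beta,
                                         True, children, evaluate))
            beta = min(beta, value)
            if alpha >= beta:
                break
        return value

# Toy tree: leaves hold static evaluations (all values invented).
tree = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
leaves = {"a1": 3, "a2": 5, "b1": 2, "b2": 9}
best = alphabeta("root", 2, float("-inf"), float("inf"), True,
                 lambda n: tree.get(n, []), lambda n: leaves.get(n, 0))
print(best)  # prints 3: max over the opponent's best (minimizing) replies
```

Note how branch "b" is cut off after seeing the leaf worth 2: the maximizer already has 3 guaranteed, so the rest of that line never needs evaluating. Pruning like this is what lets chess programs search so deeply.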

---

***Jeopardy! Pros vs. Watson
By Katy Steinmetz Tuesday, Feb. 15, 2011


Although one might suspect that the 70-year-old Alex Trebek — who performs his hosting duties with almost mechanical perfection and looks eternally 50 — is some sort of Canadian robot, Jeopardy!'s first official machine went on air this week. An IBM supercomputer, which goes by the name of Watson (after the IBM founder), is taking on the show's most epic champions in the nerdiest battle of the millennium. The kick is that this isn't just about a computer having answers stored up like as many flash cards; it's about the machine being able to understand human language and logically retrieve answers, Homo sapien–style. But that's not the only reason the computer is super: its hardware is reportedly the size of 10 refrigerators, and it can perform 80 trillion operations per second. (It's hard to know quite what that means, but it sure sounds impressive.) Ken Jennings, the longest-running champ, and Brad Rutter, the champion who took home the most green, didn't fare so well in a practice round last month. But with a $1 million prize and the dignity of the human race now on the line, one might note that practice rounds don't count.

Read more: http://www.time.com/time/specials/packages/article/0,28804,2049187_2049195_2049266,00.html #ixzz1E3DkHGWE
M.O.P.
Super Posteur


Joined: 11 Mar 2004
Posts: 2939

Posted: Wed 16 Feb 2011 07:26

Arrow How Watson's voice and face were created


Watson wins the first match in 2 games
Arrow 1st game of the first match, video 1:



Arrow 1st game of the first match, video 2:


Arrow 2nd game of the first match, video 1:


Arrow 2nd game of the first match, video 2:

M.O.P.
Super Posteur


Joined: 11 Mar 2004
Posts: 2939

Posted: Sat 26 Feb 2011 07:59

Build your own information-access system, at home or as a university project in Africa, with computer hardware accessible to everyone.

Arrow Building a personal version of the IBM Watson question-answering system

It is possible to build your own "Watson Jr." question-answering system, something less fancy and less sophisticated, scaled down for personal or business-workgroup use.


Arrow Inside System Storage -- by Tony Pearson
IBM Watson - How to build your own "Watson Jr." in your basement
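In the spirit of the scaled-down "Watson Jr." that blog post describes, the core of a personal question-answering system is just an inverted index over your own documents plus a keyword vote. The documents and function names below are made up for illustration; a real setup would add speech recognition, better ranking, and a much larger corpus.

```python
from collections import defaultdict

def build_index(docs):
    """Inverted index: word -> set of document ids containing it."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for word in text.lower().split():
            index[word].add(doc_id)
    return index

def ask(question, docs, index):
    """Return the document whose words overlap the question the most."""
    votes = defaultdict(int)
    for word in question.lower().split():
        for doc_id in index.get(word, ()):
            votes[doc_id] += 1
    if not votes:
        return "no answer found"
    best = max(votes, key=votes.get)
    return docs[best]

docs = {
    "d1": "watson runs on power7 servers at ibm",
    "d2": "deep blue played chess against kasparov",
}
index = build_index(docs)
print(ask("what servers does watson run on", docs, index))
```

Everything here runs on commodity hardware, which is the blog post's point: the full Watson needed a room of servers, but a useful question-answering toy fits in a few dozen lines on a home machine.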

Abiola
Grioonaute régulier


Joined: 13 Apr 2006
Posts: 340

Posted: Wed 02 Mar 2011 06:10

Peter Norvig explained in a paper on machine learning that today we don't necessarily have better algorithms, just more data. Insane quantities of data, which allow the machine to learn. That is how Google Translate works, and it now does real-time voice translation.
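Norvig's "more data beats cleverer algorithms" point can be shown with a deliberately dumb statistical-translation toy: learn word translations purely by counting co-occurrences in a parallel corpus. The three-sentence corpus below is invented for illustration; the same counting gets dramatically better as the corpus grows, which is exactly the argument.

```python
from collections import Counter, defaultdict

# Tiny invented French-English parallel corpus.
parallel = [
    ("la maison bleue", "the blue house"),
    ("la voiture bleue", "the blue car"),
    ("la maison rouge", "the red house"),
]

# Count how often each French word co-occurs with each English word.
cooc = defaultdict(Counter)
en_count = Counter()
for fr, en in parallel:
    en_count.update(en.split())
    for f in fr.split():
        for e in en.split():
            cooc[f][e] += 1

def translate_word(f):
    # Normalize by how often the English word appears overall (+1 smoothing),
    # so ubiquitous words like "the" don't win every alignment.
    return max(cooc[f], key=lambda e: cooc[f][e] / (en_count[e] + 1))

print(translate_word("maison"))  # prints "house"
print(translate_word("bleue"))   # prints "blue"
```

No grammar, no dictionary, no clever algorithm: just counts. With three sentences it already aligns "maison" to "house"; with billions of sentence pairs, the same statistical idea is what made systems like Google Translate work.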

The saying going around is that once a technology enters everyday use, we stop calling it artificial intelligence Very Happy

JAMES F. ALLEN wrote:

AI is not the science of building artificial people. It's not the science of understanding human intelligence. It's not even the science of trying to build artifacts that can imitate human behavior well enough to fool someone that the machine is human, as proposed in the famous Turing test ... AI is the science of making machines do tasks that humans can do or try to do ... you could argue ... that much of computer science and engineering is included in this definition.

_________________
Africans today stand at a crossroads: it is union or death!
African women, African men: building a true African union is our duty and our only chance of salvation on this earth.
A true warrior does not back away from his duty on the pretext that the task is superhuman, impossible... he fights!