New Kitt vs. Old (hypothetical discussion)
Posted: Sun Feb 24, 2008 1:49 pm
Hello all,
I am an avid fan of Knight Rider and have a BAS in cognitive computing and artificial intelligence, so I figured I would mention a few things that could, in theory, be contributing factors to the AI differences.
1.) Human interaction growth rate:
This tested method (albeit not on the scale of a "KITT" A.I.) suggests that when an A.I. is first implemented or "fathered," it will tend to be cold and calculating. Remember, the goal of artificial intelligence is for it to grow and evolve into its own entity or personality. The original KITT had been worked on by several people, which is implied in "Knight of the Phoenix." The new KITT, however, only had its creator to interact with, and this person's personality was your typical scientist introvert. Whereas, for example, Bonnie was more bubbly and fun-loving.
2.) The Singularity Effect:
Most of this information is taken verbatim from my textbook and other sources.
The technological singularity is a hypothesized point in the future variously characterized by the technological creation of self-improving intelligence, unprecedentedly rapid technological progress, or some combination of the two.
Statistician I. J. Good first wrote of an "intelligence explosion", suggesting that if machines could even slightly surpass human intellect, they could improve their own designs in ways unseen by their designers, and thus recursively augment themselves into far greater intelligences.
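Good's "intelligence explosion" idea can be sketched as a toy calculation: if each generation of a self-improving system makes gains proportional to its current capability, growth compounds geometrically. This is only an illustrative sketch with arbitrary made-up numbers (the function name, starting level, and improvement rate are all hypothetical), not a real model of machine intelligence.

```python
# Toy illustration of Good's "intelligence explosion": once a system can
# improve its own design, each generation's gains compound geometrically.
# All numbers here are arbitrary and chosen only to show the shape of the curve.

def intelligence_explosion(start=1.0, improvement_rate=0.1, generations=50):
    """Each generation, the system improves itself by a fraction of its
    own current capability, so growth is geometric rather than linear."""
    level = start
    history = [level]
    for _ in range(generations):
        level += improvement_rate * level  # smarter systems make bigger improvements
        history.append(level)
    return history

# A system starting just barely above human level (taken as 1.0) ends up
# over a hundred times that level after 50 self-improvement steps.
levels = intelligence_explosion(start=1.01)
```

The point of the sketch is Good's observation: the designers only need to get the machine *slightly* past human intellect; the recursion does the rest.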
Potential dangers
Some speculate superhuman intelligences may have goals inconsistent with human survival and prosperity. AI researcher Hugo de Garis suggests AIs may simply eliminate the human race, and humans would be powerless to stop them. Other oft-cited dangers include those commonly associated with molecular nanotechnology and genetic engineering. These threats are major issues for both singularity advocates and critics.
Some AI researchers have made efforts to diminish what they view as potential dangers associated with the singularity. The Singularity Institute for Artificial Intelligence is a nonprofit research institute for the study and advancement of Friendly Artificial Intelligence, a method proposed by SIAI research fellow Eliezer Yudkowsky for ensuring the stability and safety of AIs that experience Good's "intelligence explosion". AI researcher Bill Hibbard also addresses issues of AI safety and morality in his book Super-Intelligent Machines.
I know you're asking, "How the heck does that fit in with Knight Rider?" If you look back at the old series, the original KITT was approaching its singularity point and, at times, was able to understand and do things that surpassed his human companions' understanding. In short: give the new KITT time to evolve.