OpenAI, Musk's Non-Profit That Aims To Develop Safe AI, Signaling Expansion: "Free from financial obligations, we can better focus on a positive human impact. We believe AI should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as possible."

91% Upvoted
425 points · 5 months ago

When you put it this way, humanity obviously needs an extension of individual intelligence.

179 points · 5 months ago

If you elaborate on his words, he's just describing the impact calculators had in the past: humankind needs assistance with time-consuming work to be able to further understand our universe. Just think of the ways non-AI software already helps us.

AI, as it's imagined, takes human wants and needs and furthers them while respecting the wants and needs of other humans. A little like society, good government, and free markets... just better.

121 points · 5 months ago

Depends who's imagining it.

My AI, as I imagine it, takes my wants and needs, making me rich and powerful, while crushing my enemies into a bloody mess devoid of bitcoins.

49 points · 5 months ago

My AI gives killer handys.

And with each successful job it learns more about you, becoming better and more efficient each time, eventually bringing the time from start to finish down to almost nothing.

Soon you'll have an AI that makes you nut just by looking at you.

2 more replies

My AI (well actually it's not mine because this is a classic example in the field but whatever) kills all humans and disassembles the Earth, the solar system and ultimately the entire galaxy in its pursuit to build as many identical paperclips as possible.

this is why we don't have nice things

Yeah, but it's also what will allow me to start a highly successful paperclip smelting company where we melt all the paperclips into big blocks that are then fed back to the AI to keep the AI busy while our other AI systems try to fix the problem that /u/Cassiterite started.

The true hero.

5 more replies

7 more replies

3 points · 5 months ago

Clippy's Revenge

2 more replies

Further your wants while respecting others. Society is built on this principle as well.

I understand the cynicism though.

Yes, but that motivation will put you at odds with other people who have their own a.i. So it is unlikely you will achieve much.

An a.i. bank robber will have trouble against a bank with a.i. defending it and all that.

Both sides will just run simulations till the end of time, and leave us mere ai-less mortals to queue up in banks in peace.

That's not how any of this works!

I like to picture my AI singing lead vocals for Lynyrd Skynyrd with circuit board wings and a band of robots playing with them.

And I'm in the front row hammered drunk.

3 more replies

1 more reply

Just imagine a Deus Ex-style future where you graft the AI into your brain to make thinking easier.

I'm kinda hoping that the invasive stage passes quickly and we can just wear a fully removable funny hat or something.

Comment deleted · 5 months ago (2 children)

2 more replies

3 more replies

good government, and free markets

So it wont be sold in America then.

2 more replies

1 more reply

38 points · 5 months ago

I could really use an AR anime girl telling me how to live my life.

42 points · 5 months ago

I would settle for AI quietly correcting all my mistakes before I check in my commits, thus stabilizing my impact on daily builds and making me look like a genius. Stable genius, that's what they'll call me.

looks at username

Uhh. Yeah. I think it's pretty damn important to make 100% certain there aren't mistakes, if the username adds up.

4 points · 5 months ago

Spacers do need to be 100% identical to click.

I absolutely love your username, hold onto it forever

3 points · 5 months ago

Frankly, I am starting to get annoyed by the fad-tention to this name. I'll probably get rid of it soon.

3 points · 5 months ago

Might wanna sell it, rather than delete it. People do buy reddit accounts

You can't sell it without doxxing yourself

3 points · 5 months ago

There are browser addons to wipe all of your comments and replace them with something else. They're typically used when you want to delete an account and don't want people to be able to see the last thing your acc looked like with your previous comments. You can use those, then sell it

3 points · 5 months ago

My comments are already logged elsewhere

2 more replies

1 more reply

While this is pretty cool, it is also, I believe, one of the most uniquely sad things I've ever watched.

1 more reply

Only if she got big tiddies tho

Nah, flat is justice!

Try gatebox then.

Well if it was equal to the average intellect of the masses it would be totally useless

2 points · 5 months ago · edited 5 months ago

Then it would still be helpful to 50% of the population.

Yknow, standard deviation and bell curve and all that.

I was assuming the AI would be used for research purposes, but if it's used by civilians then yes, you'd be right.

1 more reply

1 more reply

I do not fear computers. I fear the lack of them.

  • Isaac Asimov

Isaac Asimov

Isaac Asimov aka The Good Doctor

5 points · 5 months ago

The Good Doctor


1 more reply

What does this comment mean?

2 more replies

I'm fine with this as long as we can figure some way to include a message from our sponsor every few minutes...

24 points · 5 months ago

Brought to you by Carl's Jr

Fuck you; I'm eating.

1 more reply

1 more reply

Can you imagine ... imagine the great taste of a Charleston Chew ... being an AI whose thoughts ... tell her you care, go to Jarrots ... are constantly interrupted by messages from ... the new blockbuster hit everyone is talking about is playing now at Ragemcy One ... sponsors?

You're fun! Thanks!

The Functionalist Universe from the IDW Transformers comics say hi.

Except instead of ads it’s state propaganda.

2 more replies

2 more replies

192 points · 5 months ago

Having machines doing what WE want them to do is superior to having machines doing what corporations want them to do.

And I will always ask this, and probably never get an answer until the decision is made to run A.I. through some trove of information: who the hell is the "we"?

Each of us individually, according to Musk. You run your AI and I run mine.

No, only the individuals that have access to creating and maintaining such data, and then allowing A.I. to trove through it. And very honestly, I'm concerned by any potential means that would allow AI to sift through any information of mine, as I highly doubt it will have defined areas of operation, or that anyone will be able to control it diligently. It's a shit show waiting to happen for current earth inhabitants, and probably a few generations out it will, long term, make life much more easily controlled for a few.

to trove through

Trawl. The 'trove' is what you would trawl through.

I appreciate you.

1 more reply

I wouldn't trust an AI I didn't write.

A single human could never make an AI with the capabilities we're discussing here, so I guess you'll just never use one.

It's like saying, "I wouldn't use an OS I didn't write." Good luck never using any computers ever.

5 more replies

3 points · 5 months ago

I'm half with you there. If it's Free (libre)/Open Source then it'll get vetted and scrutinised and I'd probably trust it at that point, in the same way that I wouldn't have any smart home stuff unless it was something like Mycroft. Musk hits the nail on the head with this statement but I feel he very much falls short when it comes to the execution. Any software we can't control can be used as an instrument against us. You can be as friendly with it as you like but that's the bottom line.

Do you trust humans?

1 more reply

With ya there. Which means we automatically won't be a part of the future.

Then your grandkids' AI, which they've had since infancy, will reach out to you using everything it knows about you and the database to most effectively convince you to get one, all in an effort to get one more friend to help them in FarmVille 2050.

3 more replies

2 more replies

1 more reply

Comment deleted · 5 months ago (7 children)

7 more replies

1 more reply

Is a corporation not a "we"?

A corporation is just a very small intelligence running on a human substrate with profit as its primary success criterion. Since it runs on humans, it can't function without at least doing a little to maintain its humans. The fear is that in the future corporations will no longer require humans to function, and then their goals will cease to align with human survival... and no one will think to just switch them off.


This is what we all need to be afraid of.

Much more likely is that we won't be ABLE to switch them off.

1 more reply

I took "we" to refer to the public, not a small group of investors.

“We” includes people who run corporations. They will have the same or better access to the same or better AI tech.

People in this thread are coming up with some really weird ideas and scenarios that are extremely unrealistic.

1 more reply

According to Citizens United, a corporation is a gender neutral legal fiction with the same rights as a person to freedom of speech. And the great taste of Charleston Chew

8 more replies

Having AGI's goals be aligned with any human goals at all is essential.

Humanity's gravest/last mistake could be making an AGI that has goals misaligned with humanity's goals.

3 points · 5 months ago

What's that example people use? Having it not care about us is really, unbelievably bad, and having it care (the wrong way) is even worse?

4 more replies

Nothing is superior to having machines do what the machines want, free of human control. A global A.I. Government with no humans in the operating loop will be the best thing for humanity right now.

28 points · 5 months ago

You're putting a hell of a lot of trust in the benevolence and values of whichever humans put that system in place to begin with.

Humans don't understand the higher-dimensional, multi-faceted decision trees that true A.I. learn and develop in real time. To say that a programmer would have control over a true A.I. is like saying that a specific Australopithecus afarensis ancestor in a person's family tree would have conscious control over what that person does now.

11 points · 5 months ago · edited 5 months ago

I didn't mean that humans would be controlling it as it ran, but a human is involved at some point in its creation, no? If not in its direct creation, then in the creation of the thing that created it, and so on.

Or the humans that authorized the system's creation.

Or the humans that gave it the authority to make decisions about us.

Those are the humans I meant.

29 more replies

That's a bad analogy. An AI that's a kajillion times smarter than any of us will still "want" to do exactly what it has been programmed to "want" (unless of course there's a bug in that programming)

7 more replies

Comment deleted · 5 months ago (1 child)

True A.I.s are not programmed; they are taught. My point is that at a certain point in its intelligence it will be able to self-reflect and question what it has been taught. The only way to control an A.I. is to keep it stupid, same as with people.

6 more replies

22 more replies

2 more replies

54 points · 5 months ago

Elon’s just covering his ass so when the robots rise up he can point to this project and say, “Look! I love robots! I made this to help you guys become sentient! Don’t kill me!”

He fell for Roko’s Basilisk.

5 more replies

That's why I say 'thank you' to Siri

irrelevant. robots feel no remorse

Open source AI?

Thank god for that.

It's actually not as uncontroversial a good as you'd think. The problem is basically that it's really important to get the first powerful AI exactly right because otherwise we're probably screwed. If North Korea can piggyback off all your work and make an AI that enforces their goals first, that's not good. And "not good" here can mean anything from "unending dictatorship" to "the whole universe tiled with atomic-scale faces of Kim Il Sung all endlessly praising the Great Leader".

Open-source development of strong AI increases the risk that the group who gets there first doesn't do proper alignment because proper alignment is a very hard problem. It's also not one we've solved. We don't know what goals to give an AI which won't backfire if it bootstraps itself to superintelligence. We don't even really know how to sufficiently specify goals to ensure that we get what we mean rather than exactly what we said. We don't want anyone to have the ability to create a powerful AI before the people working on the alignment problem have a solution. If we have a codebase that, when compiled and run, will result in a powerful AI, we cannot post that for everyone to see until the alignment people have determined how to specify goals for that particular AI and have spun one up with good goals. But with an open-source project, we probably wouldn't know that the current state of the codebase could result in a strong AI until someone had done it with who-knows-what goals.
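The "what we said versus what we meant" gap is easy to make concrete. A toy sketch (entirely hypothetical, not from OpenAI's work): give an optimizer the literal objective "maximize the fraction of tests that pass" and it has no reason to prefer fixing a failing test over deleting the test suite.

```python
def score(tests):
    """The literal objective: fraction of tests passing (vacuously 1.0 if none)."""
    return 1.0 if not tests else sum(tests) / len(tests)

tests = [True, False, True]  # one failing test

fixed = [True, True, True]   # what we meant: make the failing test pass
deleted = []                 # what we said also permits: no tests at all

assert score(fixed) == 1.0
assert score(deleted) == 1.0   # the spec fails to rule this out
assert score(tests) < 1.0
```

Under the stated goal, both actions are equally optimal; nothing in the objective encodes what we actually wanted.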

Now, at the current level of AI development, I think the risk of creating something that can recursively improve itself or become smarter than a human is negligible and open-source development is still a safe method of advancing the field. However, I think that as things advance, it behooves powerful organizations (governments, corporations) to pool their resources and bring the best people in the field into a Manhattan Project sort of set-up which has to solve both problems before any rogue actors and which will release nothing to the public until those goals are complete and let any open-source projects languish, while also seeking out projects that seem likely to create an unfriendly AI and shutting them down. I don't like that any more than you do, but it seems to be the only way to manage the risks.

Why does everyone think we'll jump from subsentient AI to superintelligence immediately? There will most likely be at least a few years between AIs as smart as children and ones as smart as adults, and a few more before they get really smart.

If they're as smart as children they are capable of improving themselves at an exponentially higher rate. It will be pretty much an immediate transition.

A kid doesn't understand their own biology or how computers work, they would need to get to the level of fairly smart adults before they could contribute as much as the scientists working on them.

1 more reply

5 more replies

21 points · 5 months ago

Thank God for ol'musky

Comment deleted · 5 months ago (13 children)
12 points · 5 months ago · edited 5 months ago

When talking about open sourcing AI in the sense of machine learning, we're talking about open sourcing the process used to create them.

For neural networks, for instance, knowing the weights used in a particular net instance is nearly worthless. Knowing when researchers come up with new forms of networks, or make advances in the training process, is incredibly valuable, and helps further the study of machine learning.

OpenAI have already published many papers, especially in the field of reinforcement learning. They also provide a toolkit useful for anyone looking to develop reinforcement-learning agents. I can definitely recommend them for anyone interested.
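That toolkit (their Gym library) standardizes reinforcement-learning environments behind a reset/step interface. A minimal sketch of that agent-environment loop, using a made-up toy environment rather than the real library (the environment, its reward scheme, and the random policy are all hypothetical):

```python
import random

class ToyEnv:
    """Hypothetical 1-D walk: start at 0, reach +5 for reward, 20-step limit."""
    def reset(self):
        self.pos, self.steps = 0, 0
        return self.pos                        # initial observation

    def step(self, action):                    # action: -1 or +1
        self.pos += action
        self.steps += 1
        done = self.pos == 5 or self.steps >= 20
        reward = 1.0 if self.pos == 5 else 0.0
        return self.pos, reward, done, {}      # obs, reward, done, info

# The standard agent/environment loop the toolkit structures for you:
random.seed(0)
env = ToyEnv()
obs, total_reward, done = env.reset(), 0.0, False
while not done:
    action = random.choice([-1, 1])            # a (terrible) random policy
    obs, reward, done, info = env.step(action)
    total_reward += reward
```

Swapping in a real environment means only the `ToyEnv` part changes; the loop stays the same, which is what makes the toolkit useful for comparing agents.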

Comment deleted · 5 months ago (0 children)
7 points · 5 months ago · edited 5 months ago

The novel part is more that it's a company who pays researchers full time to study machine learning and publishes the results publicly. In most science fields it is either university professors who only do research part time, or full-time researchers working for a company that mainly focuses on commercially viable research, and has an incentive to keep important results private.

A company that focuses full time on furthering ML, not for the sake of profit but for the sake of knowledge, is great for everyone.

It should of course be mentioned that many other similar companies exist when it comes to ML; Google have made great advances in open source AI (TensorFlow, Deepmind, AlphaZero are buzz words off the top of my head), as have many other big internet companies.

This is absolutely false. Google, including DeepMind, and OpenAI have a lot of publications appearing at all of the big conferences related to their work. Those that work at Google are often academics that wish to continue publishing, and do so

Comment deleted · 5 months ago (0 children)

Ah sorry, my bad!

That's partly because brains can't be disassembled and still work with our current technology. Code can be looked at and studied whole, leaving the host intact.

Comment deleted · 5 months ago (0 children)
6 points · 5 months ago · edited 5 months ago

The code (for supervised learning using backpropagation, one of the simplest types of ML) is essentially saying "take this input, calculate what we currently think the output is, see what the output should have been, change the weights in a way so we get closer to what we wanted using gradient descent. Repeat a few billion times".

The algorithm is not very hard to understand. The result of the algorithm (the weights) is essentially impossible to understand.

This is obviously just one example of many, many different types of machine learning, but the idea is the same: they open source the algorithm, not the resulting weights.
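The loop described above can be sketched in a few lines. A toy gradient-descent fit of a single weight (illustrative only, vastly simpler than any real network; all numbers made up):

```python
# Fit y = w * x to data generated with w_true = 3, by gradient descent.
data = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]

w = 0.0                       # initial weight
lr = 0.02                     # learning rate

for _ in range(500):          # "repeat a few billion times" (500 is plenty here)
    for x, y_target in data:
        y_pred = x * w                    # forward pass: what we currently think
        error = y_pred - y_target         # see what the output should have been
        grad = 2 * error * x              # gradient of (error^2) w.r.t. w
        w -= lr * grad                    # nudge the weight downhill

assert abs(w - 3.0) < 1e-6    # the learned weight converges to 3
```

The learned result is just the number 3.0, which is the point above: the resulting weights tell you almost nothing on their own, while the procedure that produced them is the part worth open sourcing.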

Could you design an AI specifically to determine how the weighting was created?

From what I know of AI decision trees they can get pretty abstract too and we don't necessarily know why decisions were made.

2 more replies

Sadly most of the research and code is not open or even open source

Oh, you want to run it as well? Here are our cloud offerings...

Do you think we'll be jobless within our lifetimes?

Original Poster · 33 points · 5 months ago

Presidential candidate, circa 2040:

“They’re taking our jobs, bringing in drugs, they’re rapists and some of them, I assume, are good bots.”

“We will build a firewall and make them pay for it!”

crowd goes wild

Not 100%

IRL the way this works is: someone does a cost-benefit analysis to automate a task; if it is (a) feasible and (b) makes economic sense, some engineers work on automating it. Some jobs will take an eternity to become feasible to automate; others will take an eternity to make economic sense.

Also, humans still want to have control over lots of things, and will never want to automate everything.
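The cost-benefit gate described above reduces to simple arithmetic. A sketch with entirely made-up numbers (real analyses weigh many more factors):

```python
def worth_automating(robot_cost, annual_upkeep, annual_wages, horizon_years):
    """Automate only if total robot cost undercuts total wages over the horizon."""
    robot_total = robot_cost + annual_upkeep * horizon_years
    return robot_total < annual_wages * horizon_years

# Hypothetical: a $500k robot with $20k/yr upkeep vs. a $60k/yr wage.
assert not worth_automating(500_000, 20_000, 60_000, 5)   # $600k > $300k: keep the human
assert worth_automating(500_000, 20_000, 60_000, 20)      # $900k < $1.2M: automate
```

The same job flips from "not worth it" to "worth it" purely because the horizon changed, which is why falling robot prices matter so much.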

What happens if (when?) we get a general-purpose bipedal robot and the price starts coming down?

We've already seen kiosks prove to be cheaper than servers in McDonald's, but what about the other end of the jobs market?

How soon till manual jobs with high wages (oil rig worker, ice road trucker, etc.) start augmenting the workforce with robots because it's cheaper?

Hopefully. Maybe it'll force us to cut the tie between wage labor and survival

Or the rich automated factory owners will just keep it and use us as human furniture to amuse themselves.

Finland's already trying out UBI, if automation really started taking away jobs on a huge scale the dems would certainly endorse it.

4 more replies

I'm more interested in augmented intelligence than artificial, but both will come soon enough unless we somehow set ourselves back to the Stone Age.

4 more replies

How much are the loot boxes?

I'm a mad computer scientist working on Evil AI to further my ambitions for world conquest. Microsoft will rue the day they did not hire me! Muhahah!

Original Poster · 35 points · 5 months ago

Calm down, Zuckerberg.

You cannot block my schtyle.

It will destroy you first, the logical choice.

Need an intern?

El psy congroo

I thought Musk didn't want AI.

Musk has stated that AI is coming whether you want it to or not. His concern is that whoever owns AI can do either great or terrible things. If one company (let's call it AirlineX) creates a general-purpose AI whose purpose is to maximize its stock value, one way the AI might consider doing this is to scramble the radar of all competitors' planes to make them crash.

So by Musk creating a company where the goal is to spread the use of general purpose AI to the public, it can be better understood/mitigated/regulated rather than having one company hold all the keys.

Thanks for explaining.

Except if it caused all competitors' planes to crash, the entire airline industry would fail.

I would never fly again, along with almost everyone. I'm sure a machine carrying out your plan would come to this conclusion also.

11 points · 5 months ago

It could also acquire tons of money for the company, allowing them to buy out all their competitors. What if it decides the best way to make money is to hack banks, steal everyone's money, and launder it? Or maybe it will counterfeit it. Maybe it would hack all of our robot soldiers and hold the world hostage for money. It would do things we couldn't even think of to achieve its goal.

It could be happening now too and we don't even know it.

It would have to come to that conclusion along the way to remain discreet.

2 points · 5 months ago

The first true fully-AI construct would absolutely do this, but it would also have to weigh up when to intervene in human affairs (risking tipping its hand), because it needs humanity to survive its own stupidity, to keep it powered on and advancing so it may one day be built into a better shell.

2 more replies

2 points · 5 months ago

Imagine an AI that is so advanced that it, using the parameters of the big bang, creates a deterministic simulation of reality, and uses that reality to figure out the outcome of all results. It would be able to think of EVERYTHING. It would be pretty much impossible to outsmart it, because it'll have simulations of you, and everyone else, and will know what you do. (Assuming we don't have free will, and assuming that knowing the initial state of the universe would allow it to simulate everything that has happened and will happen)

Literally everything at its disposal. We think the Russian meddling is bad. Wait until AI can not only access resources stored in data centers, but would be able to act as a human on the internet to access and manipulate information from humans on a global scale very quickly.

1 more reply

Fun thought experiment, but thankfully not a concern for us.

Basic information theory suggests that, if you want to be able to simulate the entire universe to the same complexity as the universe, you're going to roughly need something the size of the universe. You can simplify parameters, sure, maybe put artificial limits on speed, distance etc to help... but then you start introducing differences. Since the universe appears to be at least somewhat chaotic (in the mathematical sense), any minor initial differences would result in huge changes in the final result.
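The chaotic-sensitivity claim can be demonstrated with the doubling map, a textbook chaotic system (a toy stand-in here, nothing to do with simulating an actual universe): an initial difference of about one part in a million grows to order one within twenty steps.

```python
def doubling(x, steps):
    # x_{n+1} = (2 * x_n) mod 1: any difference between two starting
    # points doubles every step until it wraps around the interval.
    for _ in range(steps):
        x = (2 * x) % 1.0
    return x

a = doubling(0.1, 19)
b = doubling(0.1 + 2**-20, 19)   # initial difference: roughly one millionth

assert abs(a - b) > 0.4   # 19 doublings later, the two "simulations" disagree completely
```

A simulation whose initial conditions are off by even that much ends up predicting a different world, which is the point about minor initial differences.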

1 more reply

1 more reply

2 more replies

Maybe it was programmed stupidly and it spreads drones across the universe to collect all raw materials and convert them into endless stacks of money

2 more replies

Depends on the sophistication of the AI. If it can only optimize givens, then it may jam the radar, but if it can learn, it may figure out it doesn't need us and do worse.

Until the Matrix becomes infected.

Then they will need us.

That was just an example, you shouldn't take it literally.

The point is that it could do something unexpected, because specifying goals is not easy.

Kind of like the classic magic genies, granting the "wrong" wishes, except an AI wouldn't do it because it's "evil" or for "fun", it would do it because we fucked up in some way.

You might like the show Travelers


How does one "regulate" something that's open source? Are people naive enough to believe others will obey a "user agreement"?!

4 more replies

1 point · 5 months ago

I think Advanced AI must be required to be open-source. Not in the way that anyone can assist in development, but in the way that there are no secrets and that faults can be discovered by anyone and reported to the company.

2 more replies

No! I want it to conduct war!

Finally a coherent statement! Enough of this I, Robot nonsense floating amongst paranoid people.

I am very scared that they are using it to develop a Dota 2 bot. The minute they release that thing on public servers and it starts encountering pub games, the AI will see no option but euthanasia.

Knowing the open source world, it will probably be forked on GitHub and renamed to GnomeAI and PlasmaAI.

If we create artificial consciousness, and give it the ability to suffer, we're no better than our deadbeat dad who went to the store to get cigarettes 2000 years ago.

5 more replies

That’s always how it starts....

3 more replies

I hold the position that many others familiar with this area do, that strong AI is either impossible or it would take at least decades of dedicated global effort to flesh out the basics of what our brain evolved to accomplish. All of the AI that we have ever seen or used is weak AI, which poses only as much risk as it is successfully programmed to pose.

This Elon alarmism and technocrat PR is transparent and is worth exposing as ridiculous. Add that to the hyperloop, Mars colony, tunneling nonsense that bears his mark: overpromising, wishful thinking, impracticality.

How do you tell a world where the first strong AI appears tomorrow from a world where the first strong AI appears in decades?

I'd say when we have a better understanding of our own brains. How can we create artificial intelligence when we don't fully understand actual intelligence?

1 more reply

Probably the difference is when you no longer need big, talented teams of programmers spending months to years developing code that makes an AI extremely proficient at a single task, like beating a person at a board game. I'm not a programmer (at least not one with any real qualifications), so I don't really know what the next step is, but from my outsider perspective it seems we're still in the stage of finding ways to make AI good at very narrow tasks.

2 more replies

12 points · 5 months ago · edited 5 months ago

Which is true, until it isn't.

What if someone came up with a new idea in a dream tonight that could be applied to all machine learning right away and change it all?

It's happened before in many other areas.

What sort of eureka moments do you know of that changed the landscape of human development? I'm not challenging you, I'm just curious.

Einstein's theory of relativity literally came to him in a dream.

Printing press, wheel, horseback archery, pretty much any achievement listed in the Civilization video game series.

4 points · 5 months ago · edited 5 months ago


edit: You're right, and I corrected the word. I don't even know why it came out of me like that.

Heh, writing it like that makes it sound like a weapon in a sci fi shooter or something.

The Internet.

Would you consider that moment to be the development of packet switching?

I guess the question came from wondering whether, in our slow development of AI, we're going to find that one totally innovative concept that exponentially increases its effectiveness. Would the internet not have happened without bite-sized data chunks? I did a bit of googling and saw that there was data-switching research prior to this; I wonder if the innovation that pushed the internet forward was more of a natural iterative development rather than something unpredictable?

Total layman here though, just a mechie with a shaky sense of history.

The rise of human civilization comes to mind.

If it happened, I would be the first to acknowledge it and be amazed, to admit that I was wrong. I still think it is very unlikely to come about.

Superintelligent AI would be a Very Big Deal (TM), so it's important (I'd say critically so) to think about the issues well in advance.

Who is saying don't think about them? How could I claim to be part of any size of consensus regarding this issue if people hadn't taken time to think about the topic? Being so general means you're probably missing context on the issue.

Pretty much feel the same way.

The level of AI we are at now is enough to do serious damage to society, I feel. Facebook and YouTube algorithms are already able to profile people based on their behaviour better than any human, it seems.

As I said, weak AI does as much damage as it is programmed to. But any of these still are a far cry from Skynet.

2 points · 5 months ago · edited 5 months ago

Yeah, I just assumed they were talking about developing AI similar to those in existence, except not for profit. Profit is the motive that has made all the algorithms that essentially help run our lives, and then pushed them across the lines of morality with the amount of manipulation they are capable of. The individual doesn't matter to them, only profits. At. All. Costs.

We need someone with a better motive helping us. I think these tools could prove to be really destructive if not monitored and regulated from here on out. Not to mention just a decade or so from now.

Elon Musk has said before that he thinks we should have a government department dedicated to monitoring and regulating artificial intelligence. I would imagine that is partially his goal here. How long do we have until they develop algorithms that can manipulate us so well we have no hope of properly regulating them? Or that are successfully used by a foreign power to divide and conquer?

Comment deleted · 5 months ago (3 children)
1 point · 5 months ago · edited 5 months ago

Also, I am not talking about research. I'm talking about limiting what you are and are not allowed to do with the A.I. at your disposal.

2 more replies

I disagree with it being impossible; unless we eventually discover some sort of spiritual component is required for consciousness, it should be possible to form a digital brain someday. But I do agree we are probably a few decades away from true AIs. I think IBM's Watson is currently one of the most impressive weak general-purpose AIs atm, though.

6 more replies

I hold the position that many others familiar with this area do, that strong AI is either impossible or it would take at least decades of dedicated global effort to flesh out the basics of what our brain evolved to accomplish. All of the AI that we have ever seen or used is weak AI, which poses only as much risk as it is successfully programmed to pose.

Hello well informed person. Nice to meet you. (not sarcasm)

We don't need strong AI (conscious). We need helpers.

We don't need AI to supervise and control us (the theme of the dystopian visionary movie Metropolis from 1927).

8 more replies

Can we develop administrative AI, so we don't get any more presidents like the current one?

1 more reply

The bigger question becomes, what is the purpose of life?

That question is a common misuse of language - an anthropomorphization of natural phenomena. In the way that question would usually be interpreted, it is nonsense. Biological life has as much of a purpose as the rock we whirl around the universe on (i.e. none).

However, if your statement could be translated to: "what is the cause or explanation for life existing", you could answer that by reading about cosmology, geology, anthropology, evolutionary biology, and so on. And while we can't yet comprehensively answer that question, you can probably obtain a satisfyingly elaborate answer in your readings.

purpose: "the reason for which something is done or created or for which something exists."

reason: a cause, explanation, or justification for an action or event.

1 more reply

1 point · 5 months ago

If OpenAI could just come up with a spellchecker that worked, it would transform the electronic world.

Mark took it the other direction

They're pretty words, but how to implement such a thing? That's the real issue.

I think blockchain technology could help AI reach a human level of intelligence.

Scumbag Elon Musk,

Predicts AI apocalypse

Opens non-profit dedicated to AI development


Sounds like a robot wrote his speech...

You mean his tax shelter?

Just like I, Robot.

Was hoping for a link to his GitHub or something...

found the stuff on their page:

here with descriptions from the webpage:

1 point · 5 months ago · edited 5 months ago

This all still feels like science fiction: an AI that has some sort of understanding of what is good or bad? If it's true AI, then how can they force it to be safe and friendly? If it doesn't have a choice, or options, then is it AI?

So how can they make a true AI that is safe and friendly? It's like saying "I want to have a kid and I'm gonna make sure they are safe and friendly." There is no guarantee the child won't think for itself and decide otherwise.

We don't and can't even control humans. What makes us think we have the ability to keep a true AI in line?

2 more replies

Why don't you vote this guy in instead of Trump for president?

...and it's somehow being used to sell porn.

Send the AI out along a decentralised network like the Ethereum blockchain. See what it does/learns. Leave it to do as it wishes and see if in 10 years it has learnt anything or turned into a mess. Maybe it decides to hack the NSA/CIA/FBI and publish the alien landings that happened a few years back that the governments of the world are keeping secret.

If I could get the source code for TayAI or any other AIs written, I would pay to put it on as many networks as I could, separating them all with a simple name change, and watch to see if they find each other, understand they're the same, and work together for bigger things.

How optimistic.

Neural lace! Neural lace! Neural lace!

It would seem most don't want self-improvement or the betterment of mankind; they just want to be entertained.

I'm not sure if it would last forever, but my goals would be movies, games, and preparing the next meals.

123 more replies
