> Never tried this. Thanks for the interesting image.

Nope, we're officially here.
[attached image 276801]
You can add extra words to set a scene and to pick the type of rendering (sketch, rendering, photorealistic). Faces are crappy, but that kind of adds to the fun. Sometimes you get photorealistic output without the keyword.
> No, this version is limited to what it can draw in 2 minutes, and human faces are intentionally obscured.

Thank goodness that machine learning image-model crap is not AI, any more than self-driving cars are AI. It's just pattern recognition on crack: it starts from a randomized pixel canvas and keeps changing pixels until the image matches the text.
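That "randomized pixel canvas" description roughly matches one real hobbyist technique, CLIP-guided pixel optimization: start from noise and nudge pixels until a text-image scorer likes the match. A minimal sketch, assuming PyTorch and OpenAI's `clip` package; the prompt, learning rate, and step count are arbitrary, and raw pixel optimization like this tends to produce noisy, adversarial-looking output unless you add augmentations or regularizers:

```python
# Sketch: text-guided pixel optimization with CLIP (not how any specific
# commercial generator works; just the "random canvas -> match the text" idea).
import torch
import clip  # pip install git+https://github.com/openai/CLIP

device = "cpu"  # CPU keeps everything float32; on CUDA, CLIP loads as float16
model, _ = clip.load("ViT-B/32", device=device)
for p in model.parameters():  # freeze CLIP; only the canvas gets updated
    p.requires_grad_(False)

prompt = "a watercolor sketch of a lighthouse at sunset"  # arbitrary example
with torch.no_grad():
    text_feat = model.encode_text(clip.tokenize([prompt]).to(device))
    text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)

# The randomized pixel canvas, sized for CLIP's ViT-B/32 input (224x224).
canvas = torch.rand(1, 3, 224, 224, device=device, requires_grad=True)
opt = torch.optim.Adam([canvas], lr=0.05)

for step in range(200):
    img_feat = model.encode_image(canvas.clamp(0, 1))
    img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
    loss = -(img_feat * text_feat).sum()  # maximize cosine similarity
    opt.zero_grad()
    loss.backward()
    opt.step()
```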
No, this version is limited to what it can draw in 2 minutes, and human faces are intentionally obscured. There is a better version that can create some crazy stuff: look up DALL-E. That one is an AI research tool, and images can take hours to generate.
Google also sells an image-generation service that is supposedly the best on the market; it charges a monthly subscription or a project-based fee. Google staff have to review every request and output to make sure the deep-fake concept is not used on celebrities, politicians, trademarked material, or existing company logos.
I know of one Fortune 500 company using the Google AI image service to suggest new logo designs for the company. Nothing was perfect, but they are using the suggestions as a basis for the final designs.
> I think it's a cool computing toy but nothing close to actual intelligence.

Yet Jason Scott, an archivist at the Internet Archive, prolific explorer of AI art programs, and a traditional artist himself, says he is “no more scared of this than I am of the fill tool,” a reference to the feature in computer paint programs that allows a user to flood a space with color or patterns. In a conversation at The Atlantic Festival with Adrienne LaFrance, The Atlantic’s executive editor, Scott discussed his quest to understand how these programs “see.” He called them “toys” and “parlor games.”
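For anyone who hasn't thought about it since MS Paint: the "fill tool" Scott invokes is just flood fill, a short breadth-first walk that recolors a connected region. A minimal sketch (a real paint bucket also handles color tolerance, alpha, etc.):

```python
# Flood fill: recolor every cell connected to (row, col) that shares its color.
from collections import deque

def flood_fill(grid, row, col, new_color):
    old_color = grid[row][col]
    if old_color == new_color:
        return grid
    q = deque([(row, col)])
    while q:
        r, c = q.popleft()
        if 0 <= r < len(grid) and 0 <= c < len(grid[0]) and grid[r][c] == old_color:
            grid[r][c] = new_color
            q.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    return grid

print(flood_fill([[1, 1, 0], [1, 0, 0], [1, 1, 1]], 0, 0, 7))
# -> [[7, 7, 0], [7, 0, 0], [7, 7, 7]]
```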
Define smarter.
I wish I had the guts to park on a ridge in the middle of the road in SF, blast some tunes and enjoy the view like "Jasper" did.
[attached image 276921]
I hope this:

> I think it's a cool computing toy but nothing close to actual intelligence.

was taken no more seriously than this:

> Nope, we're officially here.
> It was a joke.

It's not guts; it's a safe-mode routine programmed into the stupid cars to shut down (sometimes blocking traffic) when an error happens.
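A rough sketch of the kind of fail-safe fallback being described; every name here is made up for illustration, and real autonomous-vehicle stacks are vastly more involved:

```python
# Toy "minimal risk" fallback: above some fault severity, stop where you are,
# even if that blocks traffic. Purely illustrative; no real AV API is shown.
from enum import Enum, auto

class Mode(Enum):
    DRIVING = auto()
    SAFE_STOP = auto()  # halt in place, hazards on, wait for remote help

FAULT_THRESHOLD = 2  # hypothetical severity cutoff

def on_fault(severity: int, mode: Mode) -> Mode:
    return Mode.SAFE_STOP if severity >= FAULT_THRESHOLD else mode

print(on_fault(3, Mode.DRIVING))  # Mode.SAFE_STOP: the car parks itself mid-road
```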
> It was a joke.

You passed the test.
" If a chicken-and-a-half can lay an egg-and-a-half in a day-and-a-half, just how many one-legged grasshoppers does it take to kick all the seeds out of a dill pickle? "
First, humans have been underestimated. It turns out that we (well, many of us) are really amazing at what we do, and for the foreseeable future we are likely to prove indispensable across a range of industries, especially column-writing. Computers, meanwhile, have been overestimated. Though machines can look indomitable in demonstrations, in the real world AI has turned out to be a poorer replacement for humans than its boosters have prophesied.
What’s more, the entire project of pitting AI against people is beginning to look pretty silly, because the likeliest outcome is what has pretty much always happened when humans acquire new technologies — the technology augments our capabilities rather than replaces us. Is “this time different,” as many Cassandras took to warning over the past few years? It’s looking like not.
How about fast-food workers, who were said to be replaceable by robotic food-prep machines and self-ordering kiosks? They’re safe too, Chris Kempczinski, the CEO of McDonald’s, said in an earnings call this summer. Even with a shortage of fast-food workers, robots “may be great for garnering headlines” but are simply “not practical for the vast majority of restaurants,” he said.
It’s possible, even likely, that all of these systems will improve. But there’s no evidence it will happen overnight, or quickly enough to result in catastrophic job losses in the short term.
“I don’t want to minimize the pain and adjustment costs for people who are impacted by technological change,” Handel told me. “But when you look at it, you just don’t see a lot — you just don’t see anything as much as being claimed.”
The courier company FedEx is abandoning a project to develop last-mile delivery robots. In 2019, FedEx partnered with New Hampshire-based DEKA Research and Development Corp, founded by Segway inventor Dean Kamen, to develop a wheeled robot called Roxo for last-mile deliveries.
But FedEx decided to end the project in early October, according to a report in Robotics 24/7. FedEx employees were told of the decision via an email from the company's chief transformation officer, Sriram Krishnasamy, who explained a new corporate strategy called "DRIVE."
"Although robotics and automation are key pillars of our innovation strategy, Roxo did not meet necessary near-term value requirements for DRIVE. Although we are ending the research and development efforts, Roxo served a valuable purpose: to rapidly advance our understanding and use of robotic technology," Krishnasamy wrote.
The researchers say that their findings suggest that more caution is warranted when interpreting neural network models of the brain.
“When you use deep learning models, they can be a powerful tool, but one has to be very circumspect in interpreting them and in determining whether they are truly making de novo predictions, or even shedding light on what it is that the brain is optimizing,” Fiete says.
Kenneth Harris, a professor of quantitative neuroscience at University College London, says he hopes the new study will encourage neuroscientists to be more careful when stating what can be shown by analogies between neural networks and the brain.
“Neural networks can be a useful source of predictions. If you want to learn how the brain solves a computation, you can train a network to perform it, then test the hypothesis that the brain works the same way. Whether the hypothesis is confirmed or not, you will learn something,” says Harris, who was not involved in the study. “This paper shows that ‘postdiction’ is less powerful: Neural networks have many parameters, so getting them to replicate an existing result is not as surprising.”
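Harris's "train a network, then test the hypothesis" workflow is often made concrete with representational similarity analysis (RSA). A minimal sketch using synthetic stand-in data; real use would substitute recorded neural activity and task-trained network activations for the same set of stimuli:

```python
# RSA sketch: compare the representational geometry of a brain area and a
# model layer. The arrays below are random placeholders, not real recordings.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_stimuli = 50
brain_resp = rng.normal(size=(n_stimuli, 120))   # e.g., 120 recorded neurons
model_acts = rng.normal(size=(n_stimuli, 256))   # e.g., one hidden layer

# Representational dissimilarity matrices: pairwise distances across stimuli.
brain_rdm = pdist(brain_resp, metric="correlation")
model_rdm = pdist(model_acts, metric="correlation")

# Rank-correlate the two geometries; a high rho suggests similar representations.
rho, p = spearmanr(brain_rdm, model_rdm)
print(f"RSA Spearman rho = {rho:.3f} (p = {p:.3g})")
```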
“From 2014 to 2021, Kite was a start-up using AI to help developers write code. We have stopped working on Kite and are no longer supporting the Kite software,” Smith wrote.
“Thank you to everyone who used our product and thank you to our team members and investors who made this journey possible.”
What happened?
According to Smith, even state-of-the-art machine learning models today don’t understand the structure of code, and too few developers are willing to pay for the available services.
“We failed to deliver our vision of AI-assisted programming because we were 10-plus years too early to market, i.e., the tech is not ready yet,” Smith explained.
In particular, Smith said that the inability of machine learning models to understand the structure of code, such as non-local context, was an insurmountable challenge for the Kite team.
“We made some progress towards better models for code, but the problem is very engineering intensive. It may cost over $100m to build a production-quality tool capable of synthesising code reliably, and nobody has tried that quite yet.”
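A toy illustration of the "non-local context" problem Smith describes (the file contents and window size below are invented): a completion model that only sees the last N characters of a file cannot know the signature of a helper defined far above the cursor.

```python
# Toy demo: with a bounded context window, the definition of load_config
# (and its keyword arguments) is invisible at the completion point.
FILE = (
    "def load_config(path, *, strict=True, defaults=None):\n"
    "    ...\n"
    + "\n" * 400          # hundreds of intervening lines
    + "def main():\n"
      "    cfg = load_config("
)

WINDOW = 256  # characters of context the model is allowed to see
visible = FILE[-WINDOW:]
print("def load_config" in visible)  # False: the signature fell out of view
```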
Looking ahead, what Smith really wants to achieve is what he calls “fully automated programming.” “It’s that Star Trek vision of where you tell computers in a high-level language what to do,” he said. “If it’s ambiguous, the computer will ask questions.”