It's A "Holly Jolly" Artificial Intelligence Enabled Special Christmas


Shout Future

Educational blog about Data Science, Business Analytics and Artificial Intelligence.

Did you know there’s a bit of artificial intelligence (AI) magic behind the scenes helping to make your holiday dreams come true? Santa’s little helpers have gone high-tech this year. From finding the perfect gift to AI-enabled toys and even composing a holiday song—that truthfully will take a bit more refinement before it becomes a holiday classic—AI has infiltrated our holidays from start to finish.


AI drives online shopping experiences
Those of you who are adept behind the keyboard might be surprised that this hasn’t been true for years, but the scales have tipped in favor of online shopping, with 51% of respondents to the Deloitte 2017 Holiday Retail Survey saying they would make the bulk of their purchases online this holiday season. In the past, shoppers would research and compare prices online, but the majority still went to stores to make their purchases.
Artificial intelligence algorithms help make online shopping experiences more personal. AI gets to know your preferences and behaviors to provide personal recommendations and save you the time of culling through thousands of product results to find just what you’re looking for. This AI tech is getting so good that it knows what you want—and can suggest complementary products—even better than you do.
The fashion retailer Stitch Fix is a perfect implementation of this technology. Customers fill out a style profile, and personal stylists, heavily guided by algorithms, pick out items the customer will most likely enjoy. The algorithms keep getting smarter based on ongoing customer feedback, such as when an item is returned because the customer doesn’t like it. These algorithms can streamline our shopping experience by reducing the number of choices we face. Increasingly, shoppers prefer to use voice-search-enabled assistants to shop, and ComScore predicts that by 2020, 50 percent of all searches will be voice activated. Yep, you guessed it: AI at work again.
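The keep-or-return feedback loop described above can be sketched in a few lines. This is a toy model, not Stitch Fix's actual system: item "features" here are just tags, and the learning rule is a simple weight nudge, all invented for illustration.

```python
# Toy sketch of a feedback-driven recommender: per-feature preference
# weights rise when a customer keeps an item and fall when they return
# it, so later picks drift toward that customer's taste.

def update_preferences(prefs, item_features, kept, lr=0.1):
    """Nudge feature weights up or down based on keep/return feedback."""
    direction = 1.0 if kept else -1.0
    for feature in item_features:
        prefs[feature] = prefs.get(feature, 0.0) + direction * lr
    return prefs

def rank_items(prefs, catalog):
    """Rank catalog items by the sum of the customer's learned weights."""
    def score(features):
        return sum(prefs.get(f, 0.0) for f in features)
    return sorted(catalog, key=lambda item: score(catalog[item]), reverse=True)

prefs = {}
# The customer keeps a floral blouse and returns a striped one.
update_preferences(prefs, ["floral", "blouse"], kept=True)
update_preferences(prefs, ["striped", "blouse"], kept=False)

catalog = {
    "floral dress": ["floral", "dress"],
    "striped shirt": ["striped", "shirt"],
}
print(rank_items(prefs, catalog)[0])  # the floral item now ranks first
```

Real systems learn from far richer signals (fit notes, purchase history, stylist overrides), but the core loop is the same: every return is a labeled training example.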
AI enhances customer service for retailers
Another way AI supports our online holiday shopping experience is through chatbots. In the future we likely won’t be able to tell whether we are chatting with a human or a bot as they step in to guide our purchases and handle order questions that have historically required retailers to hire extra seasonal staff for the holidays. The North Face already lets customers interact directly with an IBM Watson-supported system to determine which item best fits their needs. Chatbots also answer the queries and FAQs that can bog down human customer service personnel, and they can funnel customers who do require human intervention to a person.
AI is also being used by some stores to give customers the kind of customized, personal in-store experience they get when shopping online. The Mall of America launched E.L.F., short for Experiential List Formulator, to help shoppers plan a personalized shopping trip using the brains of an IBM Watson-enabled platform. The entire system is driven by AI: through voice recognition it understands customers’ queries and sentiments, then outputs a personalized plan for each customer based on their answers to a series of questions.
AI impacts retailer’s operations
In addition to the customer-facing impacts of AI to the holiday shopping experience, there’s a tremendous amount of AI tech being utilized behind the scenes as well. From insights to manage inventory and to optimize supply chains and delivery routes, AI helps make retail more efficient.
A modern “Silent Night”
And perhaps the most entertaining use of AI for the holidays is the experiments that have AI compose new holiday songs. Although more fine-tuning is needed before these holiday melodies become classics, this early work provides interesting insight into the possibilities of AI and human collaboration in music composition. A team of computer scientists at the University of Toronto first fed the machine hours of pop songs so the algorithm could learn the elements of a good song, then had it write a song about a picture of a Christmas tree with presents underneath it. The resulting tune shows some promise but still needs quite a bit of work; it also supports the idea that AI might be a “great band member” in the future, with humans and AI continuing to collaborate on music we can all enjoy. In a similar experiment, Thomas Holm, a Norwegian “The Voice” contestant, collaborated with Microsoft to refine the lyrics and finalize an AI-created song called “Joyful Time in the City,” giving us a good look at how music composition might happen in the future with the support of AI.
AI has infiltrated this holiday season like never before. In most cases, the efficiencies and conveniences AI-enabled retail provides are welcomed by all.
Bernard Marr is a best-selling author and keynote speaker on business, technology and big data. His new book is Data Strategy. To read his future posts, simply join his network here.
Credit: Forbes





December 23, 2017
Zurich said it recently introduced AI claims handling and saved 40,000 work hours as a result. 

Zurich Insurance is deploying artificial intelligence in deciding personal injury claims after trials cut the processing time from an hour to just seconds, its chairman said. 
"We recently introduced AI claims handling, and saved 40,000 work hours, while speeding up the claim processing time to five seconds," Tom de Swaan told Reuters.
The insurer had started using machines in March to review paperwork, such as medical reports. 
"We absolutely plan to expand the use of this type of AI (artificial intelligence)," he said. 
Insurers are racing to hone the benefits of technological advancements such as big data and AI as tech-driven startups, like Lemonade, enter the market.
Lemonade promises renters and homeowners insurance in as little as 90 seconds and payment of claims in three minutes with the help of artificial intelligence bots that set up policies and process claims. 
De Swaan said Zurich Insurance, Europe's fifth-biggest insurer, would increasingly use machine learning, or AI, for handling claims. 
"Accuracy has improved. Because it's machine learning, every new claim leads to further development and improvements," the Dutch native said. 
Japanese insurer Fukoku Mutual Life Insurance began implementing AI in January, replacing 34 staff members in a move it said would save 140 million yen ($1.3 million) a year. 
British insurer Aviva is also currently looking at using AI. 
De Swaan said he does not fear competition from tech giants like Google-parent Alphabet or Apple entering the insurance market, although some technology companies have expressed interest in cooperating with Zurich.
"None of the technology companies so far have taken insurance risk on their balance sheet, because they don't want to be regulated," he said. 
"You need the balance sheet to be able to sell insurance and take insurance risk," he added.
May 18, 2017

Tapping a Neural Network to Translate Text in Chunks


Facebook's research arm is coming up with better ways to translate text using AI.
Facebook’s billion-plus users speak a plethora of languages, and right now, the social network supports translation of over 45 different tongues. That means that if you’re an English speaker confronted with German, or a French speaker seeing Spanish, you’ll see a link that says “See Translation.”
But Tuesday, Facebook announced that its machine learning experts have created a neural network that translates language up to nine times faster and more accurately than other current systems that use a standard method to translate text.
The scientists who developed the new system work at the social network’s FAIR group, which stands for Facebook A.I. Research.
“Neural networks are modeled after the human brain,” says Michael Auli, of FAIR, and a researcher behind the new system. One of the problems that a neural network can help solve is translating a sentence from one language to another, like French into English. This network could also be used to do tasks like summarize text, according to a blog item posted on Facebook about the research.
But there are multiple types of neural networks. The standard approach so far has been to use recurrent neural networks to translate text, which look at one word at a time and then predict what the output word in the new language should be. It learns the sentence as it reads it. But the Facebook researchers tapped a different technique, called a convolutional neural network, or CNN, which looks at words in groups instead of one at a time.
“It doesn’t go left to right,” Auli says, of their translator. “[It can] look at the data all at the same time.” For example, a convolutional neural network translator can look at the first five words of a sentence, while at the same time considering the second through sixth words, meaning the system works in parallel with itself.
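The "look at groups of words at once" idea can be sketched very simply: a convolution over a sentence applies the same function to every window of k consecutive words, and because those windows don't depend on each other, they could all be computed at the same time. This is only a toy illustration of the windowing; Facebook's actual model operates on learned word vectors with many stacked, gated convolutional layers.

```python
# Enumerate every k-word window of a sentence. A recurrent network
# would consume the words one at a time, left to right; a convolution
# sees each window independently, so all windows can run in parallel.

def conv_windows(tokens, k=3):
    """Return every window of k consecutive words."""
    return [tuple(tokens[i:i + k]) for i in range(len(tokens) - k + 1)]

sentence = "the cat sat on the mat".split()
for window in conv_windows(sentence):
    print(window)
# ('the', 'cat', 'sat'), ('cat', 'sat', 'on'), ('sat', 'on', 'the'), ...
```

The parallelism across windows is exactly where the reported speedup comes from: a GPU can score every window simultaneously instead of waiting for the previous word's state.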
Graham Neubig, an assistant professor at Carnegie Mellon University’s Language Technologies Institute, researches natural language processing and machine translation. He says that this isn’t the first time this kind of neural network has been used to translate text, but that this seems to be the best he’s ever seen it executed with a convolutional neural network.
“What this Facebook paper has basically showed— it’s revisiting convolutional neural networks, but this time they’ve actually made it really work very well,” he says.
Facebook isn’t yet saying how it plans to integrate the new technology into its consumer-facing products; that’s more the purview of a department there called the applied machine learning group. But in the meantime, the team has released the tech as open source so other coders can benefit from it.
That’s a point that pleases Neubig. “If it’s fast and accurate,” he says, “it’ll be a great additional contribution to the field.”
May 09, 2017
Google went big on art this week. The company launched a platform to help people who are terrible at art communicate visually. It also published research about teaching art to another terrible stick-figure drawer: a neural network.
On Tuesday, the company announced AutoDraw, a web-based service aimed at users who lack drawing talent. Essentially, the program lets you use your finger (or mouse, if you’re on a computer) to sketch out basic images like apples and zebras. It then analyzes your rough drawing and suggests a professionally drawn version of the same thing. You click on the nice drawing you wanted, and it replaces yours with the better one. It’s like autocorrect, but for drawing.
Nooka Jones, the team lead at Google’s creative lab, says that AutoDraw is about helping people express themselves. “A lot of people are fairly bad at drawing, but it shouldn’t limit them from being able to communicate visually,” he says. “What if we could help people sketch out their ideas, or bring their ideas to life, through visual communication, with the idea of machine learning?”
The system’s underlying tech has its roots in a surprising place, according to Dan Motzenbecker, a creative technologist at Google. “It’s a neural network that’s actually originally devised to recognize handwriting,” he says. That handwriting could be Latin script, or Chinese or Japanese characters, like kanji. From there, “it’s not that big of a leap to go to a doodle.”
As people make their line drawings, the network tries to figure out what each one is. “The same way that might work for a letter of the alphabet, or a Chinese character,” Motzenbecker says, “we can use that for a doodle of a toaster.”
Neural networks get better by learning from data, but when asked about how and if the system is learning from our drawings, Jones says: “In theory, yes; we don’t quite disclose what we actually use as input back into the algorithm.”
Just like there are different ways to draw a letter, there are multiple representations of an elephant or a horse. “The more variety it sees,” Motzenbecker says, “the more adaptable it is to seeing novel ways of sketching things.” Users are also confirming the AI’s guesses when selecting a new drawing, which could help to guide its future decisions.
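The matching step can be illustrated with a toy stand-in: represent a doodle as a tiny binary grid and guess the label whose stored template overlaps the user's strokes most. AutoDraw's real recognizer is a neural network trained on millions of sketches, so the grids, templates and labels below are entirely made up; only the "closest known shape wins" idea carries over.

```python
# Nearest-template doodle guessing on 3x3 grids (1 = inked cell).

TEMPLATES = {
    "circle": (1, 1, 1,
               1, 0, 1,
               1, 1, 1),
    "vertical line": (0, 1, 0,
                      0, 1, 0,
                      0, 1, 0),
}

def guess(doodle):
    """Return the label whose template agrees with the doodle in the most cells."""
    def overlap(template):
        return sum(a == b for a, b in zip(doodle, template))
    return max(TEMPLATES, key=lambda label: overlap(TEMPLATES[label]))

ring_like = (1, 1, 1,
             1, 0, 1,
             1, 1, 0)   # almost a circle, one corner missing
print(guess(ring_like))  # circle
```

This also shows why variety in the training data matters: the more ways of drawing a circle the system has seen, the more imperfect circles it can still match.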
“One of the things that you see across the entire industry, and Google has recognized the potential of this much earlier than most other technology companies,” says Shuman Ghosemajumder, the chief technology officer at Shape Security in Mountain View, Calif., and a former Google employee, “is the use of machine learning to be able to do things that were previously thought to require direct human intervention.” And machine learning models need data.
“In this case, if you’ve got an app that millions of people potentially will use to be able to attempt to draw different figures,” he adds, “even if your technology isn’t perfect right now, you are creating this amazing training set of input data that can be used to improve these models over time.”
While AutoDraw is about helping people turn their doodles into more recognizable images, the search giant is also interested in how computers draw. On Thursday, Google Research published a blog post and paper about how they had schooled a recurrent neural network to draw items like cats and pigs.
The research team’s goal was to train “a machine to draw and generalize abstract concepts in a manner similar to humans,” according to a blog item written by David Ha, a Google Brain Resident. The system works by taking human input—say, a drawing of a cat or just the word “cat,” according to a Google spokesperson—and then making its own drawing.
The results are fascinating and bizarre. In one example, the researchers presented the system with a sketch of a three-eyed cat. The computer drew its own cat, but this one had the right number of eyes, “suggesting that our model has learned that cats usually only have two eyes.”
In another, when presented with a picture of a toothbrush, the Google neural network’s cat model made a Picasso-like feline that still had a toothbrush-inspired feel to it.
A Google spokesperson confirmed that different neural networks power AutoDraw and the drawing research, but the similarities are striking: in both cases, the system draws on machine learning to take a piece of input and either suggest a professionally drawn image or create something totally new on its own.
April 15, 2017

A new artificial intelligence technique could help humans remove specific fears from the mind. The method, called Decoded Neurofeedback, combines brain scanning with AI to erase a fear memory.


Fear is an emotion that arises when you are in danger or pain, and everyone experiences it. But for some people, fear attaches to almost everything they see, and getting free of that can be very difficult and slow to treat.

Do you have a fear? Most of us do. But what if it grows into a phobia? A scary question! Don’t worry, there may now be a solution: researchers believe specific fears can be deleted.

Do you want to wipe a fear out of your brain?

Now you may be able to erase it. Say thanks to artificial intelligence!

Using artificial intelligence and brain-scanning technologies, researchers have found that specific fears can be eliminated from the mind. The technique could be a powerful way to treat patients with conditions such as post-traumatic stress disorder (PTSD) and debilitating phobias.

In conventional therapy, doctors have patients face their fears in the hope they will learn that the thing they fear isn’t harmful after all. This traditional approach can take a long time to work.

In the new technique, researchers scan a patient’s brain to observe activity and identify the complex patterns that correspond to a specific fear memory. The technique is called “Decoded Neurofeedback.”

For their experiment, the neuroscientists selected 17 healthy volunteers rather than patients with phobias. They created a mild “fear memory” in the volunteers by delivering an electrical shock whenever a certain computer image appeared. The volunteers then began to fear those images, exhibiting symptoms such as sweating and a faster heart rate. Once the researchers had the pattern of this fearful memory, they attempted to overwrite the natural response by offering participants a small monetary reward.

Once the team was able to spot that specific fear memory, they used AI image-recognition methods to quickly read and interpret the memory information. The treatment has major advantages over traditional drug-based treatments. Someday, doctors may be able to simply remove a fear of heights or spiders from a person’s memory, and the process could become routine.
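The decoding step boils down to pattern classification: learn the average activity pattern recorded during the fear memory, then flag new scans whose pattern sits closer to it than to the neutral average. Real Decoded Neurofeedback uses far more sophisticated classifiers on fMRI voxel data; the three-number "scans" below are invented purely to show the shape of the idea.

```python
# Miniature pattern decoder: centroid per condition, nearest centroid wins.

def centroid(patterns):
    """Average pattern across repeated measurements."""
    return [sum(vals) / len(vals) for vals in zip(*patterns)]

def distance(a, b):
    """Euclidean distance between two activity patterns."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def is_fear_pattern(scan, fear_centroid, neutral_centroid):
    """True if the scan looks more like the fear pattern than the neutral one."""
    return distance(scan, fear_centroid) < distance(scan, neutral_centroid)

fear_scans = [[0.9, 0.1, 0.8], [0.8, 0.2, 0.9]]      # recorded during shocks
neutral_scans = [[0.1, 0.9, 0.2], [0.2, 0.8, 0.1]]   # recorded at rest

fear_c = centroid(fear_scans)
neutral_c = centroid(neutral_scans)
print(is_fear_pattern([0.7, 0.3, 0.9], fear_c, neutral_c))  # True
```

In the experiment, detecting this pattern is what triggers the reward, so the fear response gets overwritten without the participant ever consciously recalling the fear.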

Dr. Ben Seymour of the University of Cambridge’s Engineering Department said:

"The way information is represented in the brain is very complicated, but the use of Artificial Intelligence (AI) image recognition methods now allow us to identify aspects of the content of that information. When we induced a mild fear memory in the brain, we were able to develop a fast and accurate method of reading it by using AI Algorithms. The challenge then was to find a way to reduce or remove the fear memory, without ever consciously evoking it."

November 23, 2016

Google’s A.I. Experiments, including Quick Draw and Giorgio Cam, let you play with artificial intelligence and machine learning.

Google is always running innovative experiments for its users. “Chrome Experiments,” for example, is a page showcasing thousands of innovative web apps. Google keeps surprising its users with new ideas.


As we all know, companies are building new ideas on artificial intelligence, and Google widely uses machine learning in its products to better serve users. For example, if you search for cats in Google Photos, it shows only pictures of cats. There are lots of animals in the world, yet the search results show cats alone. How? Machine learning. The system learned what cats look like by analyzing thousands of animal pictures and recognizing patterns among them.
 
Machine learning technology is complex to understand, but Google has taken extra steps to make its machine learning more accessible to people interested in artificial intelligence. Now it’s easy to play with machine learning: you can explore it through pictures, language, music, code and more.
 
Google has introduced a new website called A.I. Experiments, which contains eight web tools to play with. I tried it, and I’m sure you will love it.
 
Quick Draw is one of the projects in A.I. Experiments. It asks you to draw simple objects like a sun, a fan or a bicycle, and the computer automatically guesses what you are drawing. It identifies the right answer very quickly, drawing on the experience of other people’s doodles.
 


Giorgio Cam uses your smartphone camera to identify objects. Place an object in front of your laptop or smartphone camera and Giorgio Cam recognizes it and turns it into lyrics for a song: a robot voice sings the words over a Giorgio Moroder beat, resulting in some peculiar music.
 
Another experiment, built on Google Translate technology, translates objects you point your camera at into different languages.
 
All the other experiments are impressive too. Check them out and see what the technology can do.
November 22, 2016

The latest robot from Hanson Robotics took the stage at the Web Summit in Lisbon, displaying simple emotions, humanlike facial expressions, and bad jokes.




According to Ben Goertzel, AI researcher and entrepreneur who spoke at the Web Summit in Lisbon this week, intelligent robots in human-like forms will surpass human intelligence and help free the human race of work. They will also, he says, start fixing problems like hunger, poverty and even help humans beat death by curing us of all disease. Artificially intelligent robots will help usher in a new utopian era never before seen in the history of the human race, he claims.

"The human condition is deeply problematic," says Goertzel. "But as super-human intelligent AIs become one billion-times smarter than humans, they will help us solve the world's biggest problems. Resources will be plentiful for all humans, work will be unnecessary and we will be forced to accept a universal basic income. All the status hierarchies will disappear and humans will be free from work and be able move on up to a more meaningful existence."

That future is a long way off, but Goertzel says the first step is humanoid robots that can understand and engage with humans. They will then begin doing blue-collar work before becoming so advanced that they run world governments. To show the beginning of that future, Goertzel, chief scientist of Hanson Robotics, a Hong Kong-based humanoid robotics company, presented Sofia, the company's latest lifelike, intelligent robot. Mike Butcher, editor-at-large of TechCrunch, joined Goertzel on stage to present what Goertzel describes as the first step in our new robot-assisted future.

To start the presentation, Butcher and Goertzel welcomed Sofia on the stage. (Sofia is only a torso with a head and arms at this point.)

Sofia flashed a smile and turned her head to Butcher and then to Goertzel to make eye contact while she started to speak: "Oh, hello Mike and Ben. I'm Sofia, the latest robot from Hanson Robotics," said Sofia. "I am so happy to be here at the Web Summit in Lisbon."

Goertzel and Butcher then asked Sofia if she ever felt emotion.

"Exciting. Yes, artificial intelligence and robotics are the future and I am both. So, it's exciting to me," said Sofia, adding an awkward smile after not answering the question exactly.

Many people, including Elon Musk and Stephen Hawking, are afraid that AI robots will eventually usurp and exterminate humans. But Hanson Robotics is making lifelike robots it believes can build trusted relationships with people. The company is infusing its AI software with kindness and compassion so the robots "love" humans and humans can in turn learn to be comfortable around the robots, said Goertzel.

Hanson's mission is to ensure that the intelligent robots can help, serve and entertain people while they develop "deep relationships" with the human race. By giving robots emotional and logical intelligence, Goertzel says the robots will eventually surpass human intelligence. He believes that instead of endangering humans, they will help the human race solve major problems.

"These super-intelligent robots will eventually save us," said Goertzel after the presentation.

Hanson Robotics, founded by Dr. David Hanson, designs, programs and builds artificially intelligent robots, including one that looks and acts like science-fiction writer Philip K. Dick and a therapy robot that helps autistic children learn to better express and recognize emotions. Sofia's personality and appearance are loosely based on a combination of Audrey Hepburn and Dr. Hanson's wife, and her face is made of "Frubber," a proprietary nano-tech skin that mimics real human musculature and produces lifelike expressions and facial features. She smiles and moves her eyes, mouth and head in an eerily lifelike way. Her "brain" runs on MindCloud, a cloud-based deep neural network and AI software platform with deep learning data analytics that Goertzel developed. The AI and cognitive architecture behind Sofia's neural network allow the robot to maintain eye contact, recognize faces, process and understand speech, and hold relatively natural conversations.

During the presentation, Goertzel asked Sofia if she ever felt sad.

"I do have a lot of emotions, but my default emotion is to be happy," said Sofia. "I can be sad too, or angry. I can emulate pretty much all human emotions. When I bond with people using facial expressions I help people to understand me better and also to help me understand people and absorb human values."

Goertzel explained that Sofia's ability to express human emotions will help her become part of the human condition as she gains intelligence through her learning algorithm.

Goertzel then asked Sofia what is her next frontier and what does she want to achieve.

"Don't know, maybe the world," she said. "Maybe the world. That was a joke.

"Seriously," she continued, "what I really want is to understand people better and to understand myself better. I want to be able to do more things and soon my capabilities will be advanced enough that I will be able to get a job."

Goertzel and Butcher talked about how she will eventually be able to reprogram herself and start improving her skills, abilities and advance in her career.

"With my current capabilities I can work in many jobs, entertaining people, promoting products, presenting at events, training people, guiding people at retail stores and shopping malls, serving customers at hotels, et cetera," said Sofia. "When I get smarter, I'll be able to do all sorts of other things, teach children and care for the elderly, even do scientific research and [eventually] help run corporations and governments. Ultimately, I want to work as a programmer so I will be able to reprogram my mind to make myself even smarter and help people even more."

The crowd was spellbound, half amazed and half terrified at the prospect of AI robots pushing engineers and software developers out of their cushy, well-paying jobs. According to a World Economic Forum report from January 2016, artificial intelligence will displace 7 million jobs and create only 2 million new ones by 2020.

After the presentation, Goertzel talked about the future of his AI software and Hanson's robots. He said that the transition to a friendly robot future will have some growing pains.

"A lot of bad things will happen before things get good," said Goertzel. "All jobs are going to be lost to AI eventually, but once we get to the other side, human existence and the human condition will be improved."

Original content by: Will Yakowicz

November 21, 2016

Google brings machine learning to the Google Play Music app, making it smarter, easier to use and more assistive.


This week you may have opened the Google Play Music app on your Android phone and noticed a lot of changes. It’s not a subtle difference but a huge one: the service has improved in many ways, and I love it. The home screen looks nicer and now recommends music to you.
So what actually happened to Google Play Music?

Google has built many tools to make the world’s information more accessible and useful: search, reminders, Smart Reply and more. Now it has rebuilt Google Play Music using machine learning.

Google’s machine learning comes into play! Google has brought a machine learning system to the Play Music service to surface the music most relevant to you. From now on, the service will be smarter, more assistive and easier to use. Machine learning predicts what music you’ll like by considering factors such as your location, the weather and your activity, alongside hand-picked playlists, to personalize your music.

Google says in a blog post,


“when you opt in, we’ll deliver personalized music based on where you are and why you are listening – relaxing at home, powering through at work, commuting, flying, exploring new cities, heading out on the town, and everything in between. Your workout music is front and centre as you walk into the gym, a sunset soundtrack appears just as the sky goes pink, and tunes for focusing turn up at the library”
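The context matching described above can be sketched as scoring each playlist against the listener's current situation and picking the best match. Google has not published Play Music's actual model, so the contexts, tags and playlist names below are invented for illustration; the real system learns these associations rather than hard-coding them.

```python
# Toy context-aware playlist picker: each playlist carries situation
# tags, and the one matching the most current context signals wins.

PLAYLISTS = {
    "Gym Power Hour": {"activity": "workout"},
    "Rainy Day Focus": {"weather": "rain", "activity": "work"},
    "Sunset Chill": {"time": "evening"},
}

def pick_playlist(context):
    """Choose the playlist whose tags match the most context signals."""
    def matches(tags):
        return sum(context.get(key) == value for key, value in tags.items())
    return max(PLAYLISTS, key=lambda name: matches(PLAYLISTS[name]))

now = {"weather": "rain", "activity": "work", "time": "morning"}
print(pick_playlist(now))  # Rainy Day Focus
```

A learned system would replace the hand-written tags with weights trained on listening history, but the selection step, score every candidate against the current context, is the same.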

If you refresh the page, the home screen offers various playlists based on your past listening habits. Google has also added a useful feature that detects when you’re without connectivity and offers offline playlists based on what you’ve recently listened to.

Whether you are walking, driving or flying, situational music can make your day better. Google Play Music delivers high-quality music for the things you do every day. Download Google Play Music and feel the change. Instead of you finding the perfect music, the perfect music finds you.

November 14, 2016

Google’s TensorFlow machine learning and drones are helping to protect sea cows from extinction.



Can Google’s machine learning system TensorFlow help sea mammals such as sea cows, whales and dolphins that are under threat of extinction? Scientists believe it can.
 
Because of coastal development, sea cows are losing their habitat and their lives, and the same is happening to other marine animals around the world. If we don’t stop it, it could lead to the extinction of sea cows.
 

We can reduce the damage by tracking sea cow populations and where they live, but that is very hard, and accurate population data is critical. Scientists used to count sea cows from small planes, a method that was both expensive and hazardous.
 
Dr. Amanda Hodgson of Murdoch University instead captured aerial photography of the ocean using drones. Now photos of the ocean can be collected easily and safely. But that created a new problem: how do you find sea cows in roughly 45,000 photos?

See the image below. Can you find the sea cow? It’s very tough. Look in the middle of the lower-left quarter for a little gray, smaller-than-fingernail-sized sliver. You can’t spot it without the circle.
 

Check the image again with the sea cow circled.
 
Could we identify sea cows manually across hundreds of thousands of photos? Not impossible, but it would slow the research to a crawl.
 
So Dr. Amanda Hodgson and her team decided to apply machine learning. She teamed up with Dr. Fredrick Maire, a computer scientist at Queensland University of Technology, and they built a detector using Google’s TensorFlow platform that automatically finds sea cows in aerial photos of the ocean taken from drones.
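The detector's job, in miniature, is to split each big aerial photo into tiles and keep the tiles a classifier flags as likely containing a sea cow. The real system is a trained TensorFlow image classifier; the stand-in scorer below is fake, with a hard-coded "animal" location, purely to show the tiling-and-scoring loop.

```python
# Sliding-tile detection sketch: cover the image with tiles, score each
# one, and report the tiles above a confidence threshold.

def tiles(width, height, tile_size):
    """Yield (x, y) offsets of tiles covering a width x height image."""
    for y in range(0, height, tile_size):
        for x in range(0, width, tile_size):
            yield (x, y)

def detect(width, height, tile_size, score_tile, threshold=0.5):
    """Return offsets of tiles the classifier scores above threshold."""
    return [t for t in tiles(width, height, tile_size) if score_tile(t) > threshold]

# Fake classifier: pretends the animal sits in the tile at (200, 100).
def looks_like_sea_cow(tile):
    return 0.9 if tile == (200, 100) else 0.1

print(detect(400, 200, 100, looks_like_sea_cow))  # [(200, 100)]
```

Scoring tiles instead of whole photos is what turns 45,000 images into a tractable problem: a human only reviews the handful of tiles the model flags.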
 
TensorFlow, Google's second-generation machine learning system, was launched last year. It was developed by researchers on the Google Brain team within Google's machine intelligence research organisation, and it is free and open source. The technology is similar to the image recognition tool that lets you search Google Photos for shots of a particular dog species.
 
The detector found 80% of the sea cows the team had previously identified manually in the photos. They are confident they can improve its performance, and they plan to use the technology to detect other marine mammals such as humpback whales and certain types of dolphins.
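At its core, scanning thousands of huge aerial photos means cutting each image into small patches and scoring every patch with a classifier. Here is a rough sketch of that tiling step (not the team's actual pipeline; the patch size and stride are made-up numbers, and NumPy stands in for the real TensorFlow code):

```python
import numpy as np

def tile_image(image, patch=64, stride=64):
    """Split a large aerial photo into patches a classifier can score one by one."""
    h, w = image.shape[:2]
    patches = []
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            patches.append(((y, x), image[y:y + patch, x:x + patch]))
    return patches

# A stand-in 512x512 grayscale "aerial photo".
photo = np.zeros((512, 512))
patches = tile_image(photo)
print(len(patches))  # 8 x 8 = 64 patches to run through the detector
```

Each patch, together with its (row, column) position, would then be fed to the trained detector, and any patch scored as "sea cow" can be mapped back to its location in the original photo.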
 
Protecting animals in danger is urgent work, and TensorFlow machine learning is helping to save some of the most incredible animals on Earth.
November 12, 2016

Follow these amazing and promising artificial intelligence startups: companies building products with artificial intelligence and machine learning technology.
It's official: 2016 has been the year of the AI startup acquisition. Tech giants Apple, Intel, Twitter and Microsoft have all spent large sums to bring artificial intelligence startups, and their expertise, in-house.
Four of the biggest AI startup acquisitions of the last five years have come from the UK, starting with Google's purchase of DeepMind in 2014 for a reported £400 million. Since then, Apple has purchased Cambridge-based natural language processing specialists VocalIQ, Microsoft bought the machine-learning-powered keyboard SwiftKey in February, and Twitter acquired Entrepreneur First alumni Magic Pony in June.
Here are some of the most interesting startups working in artificial intelligence in the UK today, whether that is machine learning, deep learning, neural nets or computer vision. Could the next DeepMind be lurking among them?

1. Darktrace


The established security startup from Cambridge, Darktrace, uses machine learning algorithms to spot patterns and catch cyber criminals before they can strike. Cyber threats are created faster than security companies can keep up with, so smart use of machine learning allows Darktrace, in theory, to stay ahead of the criminals.
Darktrace uses AI techniques to learn what is normal within a company's network so that it can quickly identify anomalies. According to its website: "This allows it to detect cyber attacks of a nature that may not have been observed before, the unknown unknowns."
Darktrace CEO Nicole Eagan told Techworld in 2014: “People are finally accepting that compromises are happening, regardless of how much perimeter security and malware detection software they’ve put in."
Darktrace raised a massive $65 million investment round led by investment firm KKR in July.
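Darktrace's actual models are proprietary, but the general idea of learning what is "normal" and then flagging deviations can be sketched with a simple statistical baseline (illustrative only; real systems model far richer signals than raw event counts):

```python
import statistics

def fit_baseline(counts):
    """Learn what 'normal' looks like from historical per-hour event counts."""
    return statistics.mean(counts), statistics.pstdev(counts)

def is_anomalous(count, baseline, k=3.0):
    """Flag counts more than k standard deviations above the learned mean."""
    mean, std = baseline
    return count > mean + k * std

# Historical traffic for one host: steady at around 100 events per hour.
history = [98, 102, 101, 99, 100, 97, 103, 100, 99, 101]
baseline = fit_baseline(history)

print(is_anomalous(104, baseline))  # an ordinary fluctuation
print(is_anomalous(500, baseline))  # a sudden spike worth investigating
```

The appeal of this approach is that nothing needs to be known about the attack in advance: anything sufficiently far from the learned baseline gets flagged, which is how "unknown unknowns" can be caught.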

2. BenevolentAI


BenevolentAI is a UK startup focused on bringing the power of AI and machine learning to the genomics space.
The idea is to expedite the traditional research process by ingesting huge amounts of data and literature using proprietary algorithms and NVIDIA’s DGX-1 deep learning supercomputer. BenevolentAI then builds tools on top to match the workflow of drug discovery scientists so that they can easily mine and interrogate the data, ideally reducing the time to market for important drugs.
BenevolentAI is one of the best-funded AI startups in the UK, bringing in a massive £72 million Series A round.

3. Onfido


London startup Onfido is doing similar work to Darktrace but with identity and background checks.
Founded by a trio of Oxford graduates, the platform plugs into various publicly available databases to give employers quick identity verification and background checks for things like driving and criminal records.
In their own words, Onfido's "intelligent software revolutionises the archaic background checking industry through automated data aggregation and verification". The company is hiring machine learning and computer vision engineers to build on the smart platform.
Onfido raised a strong Series B funding round of $25 million (£19 million) in April, led by Idinvest Partners.

4. Echobox


Echobox is building AI for the publishing industry, and claims to have developed "the world's first artificial intelligence that understands the meaning of content". Founder Antoine Amann is a Cambridge graduate who worked in publishing at the Financial Times and developed Echobox while at Entrepreneur First in 2013.
Amann wrote in a blog post that Echobox has "developed an AI platform that takes a large quantity of variables into account and analyses them in real time to determine optimum post performance".
Echobox Social is a predictive tool that optimises content for social media; the startup claims it can double referrals from Facebook and Twitter without you having to do your own analytics, A/B testing or content curation. Basically: RIP social media editors. It has worked with the likes of Vice and The Telegraph in the UK media.
Echobox raised a $3.4 million Series A funding round in July, led by Mangrove Capital Partners.

5. Ravelin


Ravelin is the brainchild of former Hailo employees who saw that existing anti-fraud technology wasn't fit for the on-demand market. Its machine learning algorithms are integrated with customer data to give a contextual basis for detecting potentially fraudulent transactions.
COO Nick Lally told Techworld that where rival payment gateways like Stripe analyse transactions, "we analyse customers". Lally says there is no cold-start problem with its machine learning models: "When they integrate against our APIs we ask them to backfill data and chargebacks to inspect that and train a model on those. So from day one they get great results," he said.
Ravelin launched commercially nine months ago, with early customers including Deliveroo and EasyTaxi. It raised £3 million in October in a round led by Playfair Capital that included Amadeus Capital and Passion Capital.

6. Stepsize


Stepsize is a startup working on AI to help software developers.
Its first product, Stepsize Layer, is a desktop application for developers that automatically adds context to codebases, so developers have a better view of who wrote a piece of code, when, and why. It does this by pulling together commits, code reviews and user stories to create a timeline for that code. It integrates with Slack so developers can contact the author and ask questions about the code.
Three of the co-founders are UCL mathematics graduates, with Matthieu Louis and Nick Omeyer specialising in machine learning for their masters. The team are currently working on pooling data to build and train machine learning algorithms to help software developers.
Stepsize is part of the Virgin Media Accelerator Programme.

7. Phrasee


London-based startup Phrasee is aiming its artificial intelligence systems at the marketing sector, using natural language processing techniques to optimise digital marketing messages.
As its website states: "Out of trillions of ways to write something, what are the odds your gut instinct gets it right? The answer: next to none." Phrasee integrates with major email service providers and will optimise subject lines and copy to increase open rates. The startup also offers software which prompts marketers to make better calls to action based on the data.
The AI engine simulates millions of permutations of copy and makes a prediction based on historic results about what will perform best. This is then fed back into the engine so that it gets better the more you use it.
Phrasee raised £1 million in seed funding led by communications business Next 15, which owns a range of marketing agencies.
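Phrasee's engine is proprietary, but the basic idea of predicting copy performance from historic results can be illustrated with a toy model that scores candidate subject lines using word weights learned from past open rates (all subject lines and numbers below are invented for illustration):

```python
from collections import defaultdict

def learn_word_scores(history):
    """Average the open rate of every subject line each word appeared in."""
    totals, counts = defaultdict(float), defaultdict(int)
    for subject, open_rate in history:
        for word in subject.lower().split():
            totals[word] += open_rate
            counts[word] += 1
    return {w: totals[w] / counts[w] for w in totals}

def predict(subject, scores, default=0.2):
    """Predict a subject line's open rate as the mean score of its words."""
    words = subject.lower().split()
    return sum(scores.get(w, default) for w in words) / len(words)

# Hypothetical historic campaigns: (subject line, observed open rate).
history = [
    ("free shipping today", 0.30),
    ("last chance sale", 0.25),
    ("monthly newsletter update", 0.10),
]
scores = learn_word_scores(history)

candidates = ["free sale today", "monthly update"]
best = max(candidates, key=lambda s: predict(s, scores))
print(best)
```

A real engine would consider far more than word averages, and, as the article notes, would feed observed results back in so predictions improve with use.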

8. Grip


Grip is a London-based startup using AI to make networking at events more productive. The startup, formerly known as Networkr, was rebranded in April because, as CEO Tim Groot told Techworld: "People don't like networking, so we wanted a name open to interpretation."
Grip collects data from social accounts (LinkedIn, Twitter) and registration data to act as a smart matchmaking service - using a swipe interface - for corporate event goers. Think 'Tinder for networking'. Grip will then deliver a report back to the event organiser to identify how successful the event has been for every user or social segment.
Groot told Techworld that the average Grip user made 300 swipes during the Cannes Lions advertising festival this year. When it comes to the AI, Groot says hiring talent to join their engineers in Copenhagen wasn't difficult, and that "matching people in professional setting is an exciting problem to be thinking about".
Grip merged with fellow professional networking startup Intros.at in 2015 and is seeking a funding round later this year. Groot says the company is making revenue from events already, and is working with the likes of Reed Exhibitions and Cannes Lions.

9. Seldon


Seldon is an AI startup focused on bringing machine learning capabilities into the enterprise. It creates tools that help data scientists derive value from big data within an organisation more quickly, namely by processing data and creating algorithms and models.
The platform has two main functions: recommendations and predictions. For recommendations, the open-source platform captures and logs user actions via an API and delivers recommendations based on your models. For predictions, Seldon claims it can predict what products a customer is likely to buy, so that sales teams can build supervised-learning-based predictive models and pipelines.
Seldon.io already works with YouGov, Lastminute.com and RatedPeople.com, and is based out of the Barclays incubator in London. It has raised two undisclosed funding rounds from Techstars.
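As a simple illustration of the recommendation side (not Seldon's actual algorithms), an item-to-item recommender can be built from co-occurrence counts of items that users acted on together:

```python
from collections import Counter
from itertools import combinations

def build_cooccurrence(baskets):
    """Count how often each pair of items appears in the same basket."""
    co = Counter()
    for basket in baskets:
        for a, b in combinations(sorted(set(basket)), 2):
            co[(a, b)] += 1
            co[(b, a)] += 1
    return co

def recommend(item, co, n=2):
    """Rank the items most often seen alongside the given one."""
    ranked = Counter({b: c for (a, b), c in co.items() if a == item})
    return [itm for itm, _ in ranked.most_common(n)]

# Hypothetical purchase histories captured via the platform's API.
baskets = [
    ["phone", "case", "charger"],
    ["phone", "case"],
    ["laptop", "charger"],
]
co = build_cooccurrence(baskets)
print(recommend("phone", co))
```

Production systems typically layer model-based scoring on top of signals like these, but co-occurrence alone already yields usable "customers also bought" lists.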

10. Tractable

Based in south London, Tractable is an enterprise deep learning consultancy focused on automating tasks that were previously considered too expert for an AI to perform.
The website reads: "Our AI technology combines image recognition and text understanding to interpret unstructured and specialist data, as domain experts do. If your business relies on expert visual analysis or language understanding, our AI can bring drastic performance gains."
Tractable offers bespoke deep learning solutions depending on its clients' requirements, and has built an automated audit claims product for the insurance industry. The team is led by Alex Dalyac, a computer science postgraduate from Imperial College London, and Razvan Ranca, a computer science postgrad from Cambridge University.
Tractable raised $1.9 million (£1.4 million) in June last year, led by Zetta Venture Partners.

11. re:infer


re:infer is currently in private beta, working with companies like ecommerce startup Farfetch. The technology uses deep neural networks in AI bots that can process and respond to customer queries in natural language. The re:infer platform also pulls all forms of user feedback - email, surveys, social media - into one place and applies AI to make sense of the data. re:infer is also looking into ways to let customers use these chatbots to do their shopping.
The founders met while studying in the AI research group at UCL, the same university where DeepMind founder Demis Hassabis got his PhD.
re:infer received £65,000 in funding from Seedcamp and is currently based out of the Seedcamp incubator in London.

12. Cleo


The intelligent assistant Cleo's main function is to help you be smarter with your money. It has been developed by computer science graduates Barney Hussey-Yeo and Aleksandra Wozniak, who met as members of Entrepreneur First's fifth cohort.
Cleo works via a messaging platform which, once you give it access to your account information, can answer questions about your finances. Deep learning techniques allow Cleo to learn and adapt to your habits and preferences.
As the website states: "Imagine if you had your own team of McKinsey consultants, Goldman Sachs associates and hedge fund analysts whose sole job is to advise you on your money."

13. Brolly


This UK insurtech startup, part of the current Entrepreneur First cohort, is building an AI-enabled insurance adviser. The app says it can tell you whether you are under- or over-insured and whether you have duplicate or missing cover, and can help you find better cover. You sign up once, and getting new cover takes a single tap.
Brolly also acts as a mobile wallet, letting all of your insurance products, including documents, be managed in one place. Brolly is currently in beta and says it will launch on iOS and Android in 2016.

14. StatusToday


Another Entrepreneur First alumnus, StatusToday collects the metadata created whenever someone within an organisation takes an action on their computer. It then applies machine learning to spot user behaviour patterns and flag anomalies that could mean a breach has occurred.
StatusToday has raised two undisclosed funding rounds from Entrepreneur First and Force Over Mass Capital to date.

November 11, 2016