

Shout Future

Educational blog about Data Science, Business Analytics and Artificial Intelligence.

Zurich said it recently introduced AI claims handling and saved 40,000 work hours as a result. 

Zurich Insurance is deploying artificial intelligence in deciding personal injury claims after trials cut the processing time from an hour to just seconds, its chairman said. 
"We recently introduced AI claims handling, and saved 40,000 work hours, while speeding up the claim processing time to five seconds," Tom de Swaan told Reuters.
The insurer had started using machines in March to review paperwork, such as medical reports. 
"We absolutely plan to expand the use of this type of AI (artificial intelligence)," he said. 
Insurers are racing to harness the benefits of technological advances such as big data and AI as tech-driven startups, like Lemonade, enter the market.
Lemonade promises renters and homeowners insurance in as little as 90 seconds and payment of claims in three minutes with the help of artificial intelligence bots that set up policies and process claims. 
De Swaan said Zurich Insurance, Europe's fifth-biggest insurer, would increasingly use machine learning, or AI, for handling claims. 
"Accuracy has improved. Because it's machine learning, every new claim leads to further development and improvements," the Dutch native said. 
Japanese insurer Fukoku Mutual Life Insurance began implementing AI in January, replacing 34 staff members in a move it said would save 140 million yen ($1.3 million) a year. 
British insurer Aviva is also currently looking at using AI. 
De Swaan said he does not fear competition from tech giants like Google-parent Alphabet or Apple entering the insurance market, although some technology companies have expressed interest in cooperating with Zurich.
"None of the technology companies so far have taken insurance risk on their balance sheet, because they don't want to be regulated," he said. 
"You need the balance sheet to be able to sell insurance and take insurance risk," he added.
May 18, 2017 1 comment

China's first national laboratory for brain-like artificial intelligence (AI) technology was inaugurated Saturday to pool the country's top research talent and boost the technology. China’s rapid rise up the ranks of AI research has the world's scientific community taking notice. In October, the Obama White House released a “strategic plan” for AI research, which noted that the U.S. no longer leads the world in journal articles on “deep learning,” a particularly hot subset of AI research right now. The country that had overtaken the U.S.? China, of course.
“I have a hard time thinking of an industry we cannot transform with AI,” says Andrew Ng, chief scientist at Baidu. Ng previously cofounded Coursera and Google Brain, the company’s deep learning project. Now he directs Baidu’s AI research out of Sunnyvale, California, right in Silicon Valley.
“China has a fairly deep awareness of what’s happening in the English-speaking world, but the opposite is not true,” says Ng. He points out that Baidu rolled out neural network-based machine translation and achieved speech recognition accuracy surpassing humans before Google and Microsoft, respectively, did the same, yet the American companies got far more publicity. “The velocity of work is much faster in China than in most of Silicon Valley,” says Ng.

Approved by the National Development and Reform Commission in January, the lab, based at the University of Science and Technology of China (USTC), aims to develop a brain-like computing paradigm and applications.
The university, known for its leading role in developing quantum communication technology, hosts the national lab in collaboration with a number of the country's top research bodies such as Fudan University, Shenyang Institute of Automation of the Chinese Academy of Sciences as well as Baidu, operator of China's biggest online search engine.
Wan Lijun, president of USTC and chairman of the national lab, said that mimicking the human brain's ability to sort information will help build a complete AI technology development paradigm. The lab will carry out research to guide machine learning, such as recognizing messages and using visual neural networks to solve problems. It will also focus on developing new applications from its technological achievements.

May 15, 2017 3 comments

TAPPING A NEURAL NETWORK TO TRANSLATE TEXT IN CHUNKS


Facebook's research arm is coming up with better ways to translate text using AI.
Facebook’s billion-plus users speak a plethora of languages, and right now, the social network supports translation of over 45 different tongues. That means that if you’re an English speaker confronted with German, or a French speaker seeing Spanish, you’ll see a link that says “See Translation.”
But Tuesday, Facebook announced that its machine learning experts have created a neural network that translates language up to nine times faster and more accurately than other current systems that use a standard method to translate text.
The scientists who developed the new system work at the social network’s FAIR group, which stands for Facebook A.I. Research.
“Neural networks are modeled after the human brain,” says Michael Auli, of FAIR, and a researcher behind the new system. One of the problems that a neural network can help solve is translating a sentence from one language to another, like French into English. This network could also be used to do tasks like summarize text, according to a blog item posted on Facebook about the research.
But there are multiple types of neural networks. The standard approach so far has been to use recurrent neural networks to translate text, which look at one word at a time and then predict what the output word in the new language should be. It learns the sentence as it reads it. But the Facebook researchers tapped a different technique, called a convolutional neural network, or CNN, which looks at words in groups instead of one at a time.
“It doesn’t go left to right,” Auli says, of their translator. “[It can] look at the data all at the same time.” For example, a convolutional neural network translator can look at the first five words of a sentence, while at the same time considering the second through sixth words, meaning the system works in parallel with itself.
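The parallelism Auli describes can be made concrete with a toy numpy sketch: a single convolutional filter scores every five-word window of a sentence at once, where a recurrent network would have to read the words in order. This is an illustration of the general CNN idea with random stand-in embeddings, not FAIR's actual architecture:

```python
import numpy as np

# Toy sketch: a convolutional translator looks at groups of words
# (here, windows of 5) at every position simultaneously, where an RNN
# would consume the sentence one word at a time.
rng = np.random.default_rng(0)

sentence_len, embed_dim, kernel_width = 10, 8, 5
embeddings = rng.normal(size=(sentence_len, embed_dim))   # one row per word
conv_filter = rng.normal(size=(kernel_width, embed_dim))  # one learned filter

# Stack every window of 5 consecutive words into one array, so the
# scores for all positions can be computed in a single parallel step.
windows = np.stack([embeddings[i:i + kernel_width]
                    for i in range(sentence_len - kernel_width + 1)])
scores = np.einsum('nwd,wd->n', windows, conv_filter)

print(scores.shape)  # (6,): one score per 5-word window
```

A real model stacks many such filters and layers, but the key property is already visible: no window's score depends on having computed the previous one.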
Graham Neubig, an assistant professor at Carnegie Mellon University’s Language Technologies Institute, researches natural language processing and machine translation. He says that this isn’t the first time this kind of neural network has been used to translate text, but that this seems to be the best he’s ever seen it executed with a convolutional neural network.
“What this Facebook paper has basically showed— it’s revisiting convolutional neural networks, but this time they’ve actually made it really work very well,” he says.
Facebook isn’t yet saying how it plans to integrate the new technology into its consumer-facing products; that’s more the purview of a department there called the applied machine learning group. But in the meantime, the researchers have released the technology as open source, so other coders can benefit from it.
That’s a point that pleases Neubig. “If it’s fast and accurate,” he says, “it’ll be a great additional contribution to the field.”
May 09, 2017 No comments
In this industry, it's a tired old cliche to say that we're building the future. But that's true now more than at any time since the Industrial Revolution. The proliferation of personal computers, laptops, and cell phones has changed our lives, but by replacing or augmenting systems that were already in place. Email supplanted the post office; online shopping replaced the local department store; digital cameras and photo sharing sites such as Flickr pushed out film and bulky, hard-to-share photo albums. AI presents the possibility of changes that are fundamentally more radical: changes in how we work, how we interact with each other, how we police and govern ourselves.

Fear of a mythical "evil AI" derived from reading too much sci-fi won't help. But we do need to ensure that AI works for us rather than against us; we need to think ethically about the systems that we're building. Microsoft's CEO, Satya Nadella, writes:
The debate should be about the values instilled in the people and institutions creating this technology. In his book Machines of Loving Grace, John Markoff writes, 'The best way to answer the hard questions about control in a world full of smart machines is by understanding the values of those who are actually building these systems.' It's an intriguing question, and one that our industry must discuss and answer together.
What are our values? And what do we want our values to be? Nadella is deeply right in focusing on discussion. Ethics is about having an intelligent discussion, not about answers, as such—it's about having the tools to think carefully about real-world actions and their effects, not about prescribing what to do in any situation. Discussion leads to values that inform decision-making and action.
The word "ethics" comes from "ethos," which means character: what kind of a person you are. "Morals" comes from "mores," which basically means customs and traditions. If you want rules that tell you what to do in any situation, that's what customs are for. If you want to be the kind of person who exercises good judgment in difficult situations, that's ethics. Doing what someone tells you is easy. Exercising good judgment in difficult situations is a much tougher standard.
Exercising good judgement is hard, in part, because we like to believe that a right answer has no bad consequences; but that's not the kind of world we have. We've damaged our sensibilities with medical pamphlets that talk about effects and side effects. There are no side effects; there are just effects, some of which you might not want. All actions have effects. The only question is whether the negative effects outweigh the positive ones. That's a question that doesn't have the same answer every time, and doesn't have to have the same answer for every person. And doing nothing because thinking about the effects makes us uncomfortable is, in fact, doing something.
The effects of most important decisions aren't reversible. You can't undo them. The myth of Pandora's box is right: once the box is opened, you can't put the stuff that comes out back inside. But the myth is right in another way: opening the box is inevitable. It will always be opened; if not by you, by someone else. Therefore, a simple "we shouldn't do this" argument is always dangerous, because someone will inevitably do it, for any possible "this." You may personally decide not to work on a project, but any ethics that assumes people will stay away from forbidden knowledge is a failure. It's far more important to think about what happens after the box has been opened. If we're afraid to do so, we will be the victims of whoever eventually opens the box.
Finally, ethics is about exercising judgement in real-world situations, not contrived situations and hypotheticals. Hypothetical situations are of very limited use, if not actually harmful. Decisions in the real world are always more complex and nuanced. I'm completely uninterested in whether a self-driving car should run over the grandmothers or the babies. An autonomous vehicle that can choose which pedestrian to kill surely has enough control to avoid the accident altogether. The real issue isn't who to kill, where either option forces you into unacceptable positions about the value of human lives, but how to prevent accidents in the first place. Above all, ethics must be realistic, and in our real world, bad things happen.
That's my rather abstract framework for an ethics of AI. I don't want to tell data scientists and AI developers what to do in any given situation. I want to give scientists and engineers tools for thinking about problems. We surely can't predict all the problems and ethical issues in advance; we need to be the kind of people who can have effective discussions about these issues as we anticipate and discover them.

Talking through some issues

What are some of the ethical questions that AI developers and researchers should be thinking about? Even though we're still in the earliest days of AI, we're already seeing important issues rise to the surface: issues about the kinds of people we want to be, and the kind of future we want to build. So, let's look at some situations that made the news.

Pedestrians and passengers

The self-driving car/grandmother versus babies thing is deeply foolish, but there's a variation of it that's very real. Should a self-driving car that's in an accident situation protect its passengers or the people outside the car? That's a question already being discussed in corporate board rooms, as it was recently at Mercedes, which decided that the company's duty was to protect the passengers rather than pedestrians. I suspect that Mercedes' decision was driven primarily by accounting and marketing: who will buy a car that will sacrifice the owner to avoid killing a pedestrian? But Mercedes made an argument that's at least ethically plausible: they have more control over what happens to the person inside the car, so better to save the passenger than to roll the dice on the pedestrians. One could also argue that Mercedes has an ethical commitment to the passengers, who have put their lives in the hands of its AI systems.
The bigger issue is to design autonomous vehicles that can handle dangerous situations without accidents. That's the real ethical choice. How do you trade off cost, convenience, and safety? It's possible to make cars that are safer or less safe; AI doesn't change that at all. It's impossible to make a car (or anything else) that's completely safe, at any price. So, the ethics here ultimately come down to a tradeoff between cost and safety, to ourselves and to others. How do we value others? Not grandmothers or babies (who will inevitably be victims, just as they are now, though hopefully in smaller numbers), but passengers and pedestrians, Mercedes' customers and non-customers? The answers to these questions aren't fixed, but they do say something important about who we are.

Crime and punishment

COMPAS is commercial software used in many state courts to recommend prison sentences, bail terms, and parole. In 2016, ProPublica published an excellent article showing that COMPAS consistently scores blacks as greater risks for re-offending than whites who committed similar or more serious crimes.
Although COMPAS has been secretive about the specifics of their software, ProPublica published the data on which their reports were based. Abe Gong, a data scientist, followed up with a multi-part study, using ProPublica's data, showing that the COMPAS results were not "biased." Abe is very specific: he means "biased" in a technical, statistical sense. Statistical bias is a statement about the relationship between the outputs (the risk scores) and the inputs (the data). It has little to do with whether we, as humans, think the outputs are fair.
Abe is by no means an apologist for COMPAS or its developers. As he says, "Powerful algorithms can be harmful and unfair, even when they're unbiased in a strictly technical sense." The results certainly had disproportionate effects that most of us would be uncomfortable with. In other words, they were "biased" in the non-technical sense. "Unfair" is a better word, one that doesn't bring in the trappings of statistics.
The output of a program reflects the data that goes into it. "Garbage in, garbage out" is a useful truism, especially for systems that build models based on terabytes of training data. Where does that data come from, and does it embody its own biases and prejudices? A program's analysis of the data may be unbiased, but if the data reflects arrests, and if police are more likely to arrest black suspects, while letting whites off with a warning, a statistically unbiased program will necessarily produce unfair results. The program also took into account factors that may be predictive, but that we might consider unfair: is it fair to set a higher bail because the suspect's parents separated soon after birth, or because the suspect didn't have access to higher education?
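A tiny simulation makes the "garbage in" point concrete. All numbers here are invented for illustration: both groups re-offend at the same true rate, but one group is arrested for it twice as often, so the risk score that a faithful, statistically unbiased model learns from arrest records differs by group anyway:

```python
import numpy as np

# Hypothetical simulation: identical true behavior, unequal policing.
rng = np.random.default_rng(42)
n = 100_000
true_reoffense_rate = 0.30             # the same for both groups
arrest_prob = {"A": 0.80, "B": 0.40}   # group A is arrested twice as often

learned_risk = {}
for group, p_arrest in arrest_prob.items():
    reoffended = rng.random(n) < true_reoffense_rate
    arrested = reoffended & (rng.random(n) < p_arrest)
    # A model trained on arrest records faithfully learns this rate:
    learned_risk[group] = arrested.mean()

# learned_risk["A"] comes out near 0.24 and learned_risk["B"] near 0.12:
# twice the "risk" for group A, despite identical true re-offense rates.
```

The model is doing nothing wrong statistically; the unfairness was baked into the labels before any algorithm touched them.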
There's not a lot that we can do about bias in the data: arrest records are what they are, and we can't go back and un-arrest minority citizens. But there are other issues at stake here. As I've said before, I'm much more concerned about what happens behind closed doors than what happens in the open. Cathy O'Neil has frequently argued that secret algorithms and secret data models are the real danger. That's really what COMPAS shows. It is almost impossible to discuss whether a system is unfair if we don't know what the system is and how it works. We don't just need open data; we need to open up the models that are built from the data.
COMPAS demonstrates, first, that we need a discussion about fairness, and what that means. How do we account for the history that has shaped our statistics, a history that was universally unfair to minorities? How do we address bias when our data itself is biased? But we can't answer these questions if we don't also have a discussion about secrecy and openness. Openness isn't just nice; it's an ethical imperative. Only when we understand what the algorithms and the data are doing, can we take the next steps and build systems that are fair, not just statistically unbiased.

Child labor

One of the most penetrating remarks about the history of the internet is that it was "built on child labor." The IPv4 protocol suite, together with the first implementations of that suite, was developed in the 1980s, and was never intended for use as a public, worldwide, commercial network. It was released well before we understood what a 21st century public network would need. The developers couldn't foresee more than a few tens of thousands of computers on the internet; they didn't anticipate that it would be used for commerce, with stringent requirements for security and privacy; putting a system on the internet was difficult, requiring handcrafted static configuration files. Everything was immature; it was "child labor," technological babies doing adult work.
Now that we're in the first stages of deploying AI systems, the stakes are even higher. Technological readiness is an important ethical issue. But like any real ethical issue, it cuts both ways. If the public internet had waited until it was "mature," it probably would never have happened; if it had happened, it would have been an awful bureaucratic mess, like the abandoned ISO-OSI protocols, and arguably no less problematic. Unleashing technological children on the world is irresponsible, but preventing those children from growing up is equally irresponsible.
To move that argument to the 21st century: my sense is that Uber is pushing the envelope too hard on autonomous vehicles. And we're likely to pay for that—in vehicles that perhaps aren't as safe as they should be, or that have serious security vulnerabilities. (In contrast, Google is being very careful, and that care may be why they've lost some key people to Uber.) But if you go to the other extreme and wait until autonomous vehicles are "safe" in every respect, you're likely to end up with nothing: the technology will never be deployed. Even if it is deployed, you will inevitably discover risk factors that you didn't foresee, and couldn't have foreseen without real experience.
I'm not making an argument about whether autonomous vehicles, or any other AI, are ready to be deployed. I'm willing to discuss that, and if necessary, to disagree. What's more important is to realize that this discussion needs to happen. Readiness itself is an ethical issue, and one that we need to take seriously. Ethics isn't simply a matter of saying that any risk is acceptable, or (on the other hand) that no risk is acceptable. Readiness is an ethical issue precisely because it isn't obvious what the "right" answer is, or whether there is any "right" answer. Is it an "ethical gray area"? Yes, but that's precisely what ethics is about: discussing the gray areas.

The state of surveillance

In a chilling article, The Verge reports that police in Baltimore used a face identification application called Geofeedia, together with photographs shared on Instagram, Facebook, and Twitter, to identify and arrest protesters. The Verge's report is based on a more detailed analysis by the ACLU. Instagram and the other companies quickly terminated Geofeedia's account after the news went public, though they willingly provided the data before it was exposed by the press.
Applications of AI to criminal cases quickly get creepy. We should all be nervous about the consequences of building a surveillance state. People post pictures to Instagram without thinking of the consequences, even when they're at demonstrations. And, while it's easy to say "anything you post should be assumed to be public, so don't post anything that you wouldn't want anyone to see," it's difficult, if not impossible, to think about all the contexts in which your posts can be put.
The ACLU suggests putting the burden on the social media companies: social media companies should have "clear, public, and transparent policies to prohibit developers from exploiting user data for surveillance." Unfortunately, this misses the point: just as you can't predict how your posts will be used or interpreted, who knows the applications to which software will be put? If we only have to worry about software that's designed for surveillance, our task is easy. It's more likely, though, that applications designed for innocent purposes, like finding friends in crowds, will become parts of surveillance suites.
The problem isn't so much the use or abuse of individual Facebook and Instagram posts, but the scale that's enabled by AI. People have always seen other people in crowds, and identified them. Law enforcement agencies have always done the same. What AI enables is identification at scale: matching thousands of photos from social media against photos from drivers' license databases, passport databases, and other sources, then taking the results and crossing them with other kinds of records. Suddenly, someone who participates in a demonstration can find themselves facing a summons over an old parking ticket. Data is powerful, and becomes much more powerful when you combine multiple data sources.
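Computationally, "identification at scale" is cheap once faces have been reduced to embedding vectors: matching every face in a crowd against an entire database is one matrix multiply. The sketch below uses random unit vectors as stand-ins for learned face embeddings, and the database sizes are invented:

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 128  # a typical size for a face-embedding vector

def unit(v):
    """Normalize rows to unit length so a dot product is cosine similarity."""
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

license_db = unit(rng.normal(size=(50_000, dim)))   # e.g. license photos
crowd_faces = unit(rng.normal(size=(200, dim)))     # faces from one event

# Cosine similarity of every crowd face against every database entry,
# computed in a single operation; argmax proposes a candidate identity.
similarity = crowd_faces @ license_db.T             # shape (200, 50_000)
candidates = similarity.argmax(axis=1)              # one candidate per face
```

A real system would apply a similarity threshold rather than a bare argmax before treating any match as credible; the point here is only that cross-referencing thousands of photos against millions of records costs almost nothing.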
We don't want people to be afraid of attending public gatherings, or in terror that someone might take a photo of them. (A prize goes to anyone who can find me on the cover of Time. These things happen.) But it's also unreasonable to expect law enforcement to stick to methodologies from the 80s and earlier: crime has certainly moved on. So, we need to ask some hard questions—and "should law enforcement look at Instagram" is not one of them. How does automated face recognition at scale change the way we relate to each other, and are those changes acceptable to us? Where's the point at which AI becomes harassment? How will law enforcement agencies be held accountable for the use, and abuse, of AI technologies? Those are the ethical questions we need to discuss.

Our AIs are ourselves

Whether it's fear of losing jobs or fear of a superintelligence deciding that humans are no longer necessary, it's always been easy to conjure up fears of artificial intelligence.
But marching to the future in fear isn't going to end well. And unless someone makes some fantastic discoveries about the physics of time, we have no choice but to march into the future. For better or for worse, we will get the AI that we deserve. The bottom line of AI is simple: to build better AI, be better people.
That sounds trite, and it is trite. But it's also true. If we are unwilling to examine our prejudices, we will implement AI systems that are "unfair" even if they're statistically unbiased, merely because we won't have the interest to examine the data on which the system is trained. If we are willing to live under an authoritarian government, we will build AI systems that subject us to constant surveillance: not just through Instagrams of demonstrations, but in every interaction we take part in. If we're slaves to a fantasy of wealth, we won't object to entrepreneurs releasing AI systems before they're ready, nor will we object to autonomous vehicles that preferentially protect the lives of those wealthy enough to afford them.
But if we insist on open, reasoned discussion of the tradeoffs implicit in any technology; if we insist that both AI algorithms and models are open and public; and if we don't deploy technology that is grossly immature, but also don't suppress new technology because we fear it, we'll be able to have a healthy and fruitful relationship with the AIs we develop. We may not get what we want, but we'll be able to live with what we get.
Walt Kelly said it best, back in 1971: "we have met the enemy and he is us." In a nutshell, that's the future of AI. It may be the enemy, but only if we make it so. I have no doubt that AI will be abused and that "evil AI" (whatever that may mean) will exist. As Tim O'Reilly has argued, large parts of our economy are already managed by unintelligent systems that aren't under our control in any meaningful way. But evil AI won't be built by people who think seriously about their actions and the consequences of their actions. We don't need to foresee everything that might happen in the future, and we won't have a future if we refuse to take risks. We don't even need complete agreement on issues such as fairness, surveillance, openness, and safety. We do need to talk about these issues, and to listen to each other carefully and respectfully. If we think seriously about ethical issues and build these discussions into the process of developing AI, we'll come out OK.
To create better AI, we must be better people.
February 18, 2017 No comments

A new artificial intelligence (AI) technique may help humans remove specific fears from the mind. The combination of brain scanning and artificial intelligence, called Decoded Neurofeedback, is a new method for erasing fear memories.


Fear is an emotion triggered by danger, pain, or other threats, and everyone experiences it. But some people come to fear almost everything they encounter, and getting free of that hell is very tough; a cure can take a long time.

Do you have a fear? I think all of us do. But what if it becomes a phobia? That's a scary question! Don't worry, there is now a solution: you can delete your fear.

Do you want to wipe a fear out of your brain?

Now you can erase a fear from your brain. Say thanks to Artificial Intelligence!

Using artificial intelligence and brain-scanning technologies, researchers have found that specific fears can be eliminated from the mind. The technique could be a great way to treat patients with conditions such as Post-Traumatic Stress Disorder (PTSD) and debilitating phobias.

In conventional therapy, doctors have patients face their fears in the hope they will learn that the thing they fear isn't harmful after all. This traditional therapy can take a long time to cure a patient.

In the upcoming technique, researchers instead scan the patient's brain to observe activity and identify complex patterns that match a specific fear memory. This technique is called "Decoded Neurofeedback".

For their experiment, the neuroscientists selected 17 healthy volunteers rather than patients with phobias. They created a mild "fear memory" in the volunteers by giving them an electric shock when they saw a certain computer image. The volunteers then began to fear those images, exhibiting symptoms such as sweating and a faster heart rate. Once they had the pattern of this fearful memory, the researchers attempted to overwrite the natural response by offering the participant a small monetary reward.

Once the research team was able to spot that specific fear memory, they used AI image-recognition methods to quickly read and interpret the memory information. The treatment has major benefits over traditional drug-based treatments. Someday, doctors may be able simply to remove the fear of heights or spiders from a person's memory, and the process could become easy and routine.
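The "decoding" step is, at bottom, pattern classification over brain-activity data. The sketch below is a minimal stand-in using synthetic activity vectors and a nearest-centroid rule; the study's actual decoder and data are more sophisticated:

```python
import numpy as np

rng = np.random.default_rng(7)
voxels = 50  # invented number of measured brain locations

# Synthetic "true" activity patterns for two memory states.
fear_template = rng.normal(size=voxels)
neutral_template = rng.normal(size=voxels)

def trials(template, n=40, noise=0.5):
    """Simulated noisy scans of one underlying pattern."""
    return template + noise * rng.normal(size=(n, voxels))

# "Training": learn the average pattern for each state.
fear_centroid = trials(fear_template).mean(axis=0)
neutral_centroid = trials(neutral_template).mean(axis=0)

def decode(scan):
    """Classify a new scan by whichever learned pattern it is closest to."""
    d_fear = np.linalg.norm(scan - fear_centroid)
    d_neutral = np.linalg.norm(scan - neutral_centroid)
    return "fear" if d_fear < d_neutral else "neutral"

new_scan = fear_template + 0.5 * rng.normal(size=voxels)
print(decode(new_scan))  # prints: fear
```

Once a scan can be labeled this way in real time, the neurofeedback idea is to reward the brain whenever the "fear" pattern appears, without ever showing the feared image itself.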

Dr. Ben Seymour, of the University of Cambridge's Engineering Department, said:

"The way information is represented in the brain is very complicated, but the use of Artificial Intelligence (AI) image recognition methods now allow us to identify aspects of the content of that information. When we induced a mild fear memory in the brain, we were able to develop a fast and accurate method of reading it by using AI Algorithms. The challenge then was to find a way to reduce or remove the fear memory, without ever consciously evoking it."

November 23, 2016 No comments

Google’s A.I. Experiments, including Quick Draw and Giorgio Cam, let you play with artificial intelligence and machine learning.

Google is always running innovative experiments for its users. “Chrome Experiments,” for example, is a page where we can see thousands of innovative web apps. Google keeps surprising its users with new ideas.


As we all know, companies are using artificial intelligence in their new ideas. Google uses machine learning widely in its products to better serve users. For example, if you search for cats in Google Photos, it shows pictures of cats only, out of all the animals in the world. How? Machine learning: the system knows what a cat looks like because it has analyzed thousands of animal pictures and recognized the patterns among them.
 
Machine learning technology is complex to understand, but Google has taken some extra steps to make its machine learning more accessible to people who are interested in artificial intelligence. Now it's very easy to play with machine learning: you can explore it through pictures, language, music, code and more.
 
Google has introduced a new website called A.I. Experiments, which contains eight web tools to play with. I tried it, and I definitely believe you will love it.
 
Quick Draw is one of the projects in A.I. Experiments. It asks you to draw simple objects like a sun, a fan, or a bicycle, and the computer automatically guesses what you are drawing. It identifies the right answer very quickly, drawing on what it has learned from other people's doodles.
 


Giorgio Cam uses your smartphone camera to identify objects. If you place an object in front of your laptop or smartphone camera, Giorgio Cam recognizes it and turns it into lyrics to a song. A robot voice sings the words over a Giorgio Moroder beat, resulting in some peculiar music.
 
Google Translate Tech translates the objects you point your camera at into different languages.
 
All the other experiments are also very impressive. Check them out and see what the technology can do.
November 22, 2016 2 comments
About Me

Koti