Shout Future

Educational blog about Data Science, Business Analytics and Artificial Intelligence.

Broadly, there are three types of machine learning algorithms:

1. Supervised Learning

How it works: This algorithm consists of a target/outcome variable (or dependent variable) which is to be predicted from a given set of predictors (independent variables). Using this set of variables, we generate a function that maps inputs to desired outputs. The training process continues until the model achieves a desired level of accuracy on the training data. Examples of supervised learning: Regression, Decision Tree, Random Forest, KNN, Logistic Regression, etc.

2. Unsupervised Learning

How it works: In this algorithm, we do not have any target or outcome variable to predict/estimate. It is used for clustering a population into different groups, which is widely used for segmenting customers into groups for specific interventions. Examples of unsupervised learning: Apriori algorithm, K-means.

3. Reinforcement Learning:

How it works: Using this algorithm, the machine is trained to make specific decisions. It works this way: the machine is exposed to an environment where it trains itself continually using trial and error. The machine learns from past experience and tries to capture the best possible knowledge to make accurate business decisions. Example of reinforcement learning: Markov Decision Process.
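As a rough illustration of that trial-and-error loop, here is a minimal tabular Q-learning sketch on a made-up five-state corridor. The environment, rewards and hyperparameters are invented purely for illustration:

import random

# Hypothetical toy problem: 5 states in a row, start at state 0, reward of +1 at state 4
n_states, n_actions = 5, 2          # actions: 0 = move left, 1 = move right
Q = [[0.0] * n_actions for _ in range(n_states)]
alpha, gamma, epsilon = 0.5, 0.9, 0.3

def step(state, action):
    """Move left or right; reaching the last state gives reward 1 and ends the episode."""
    next_state = max(0, min(n_states - 1, state + (1 if action == 1 else -1)))
    reward = 1.0 if next_state == n_states - 1 else 0.0
    return next_state, reward, next_state == n_states - 1

for episode in range(200):
    state, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit the best known action, occasionally explore
        if random.random() < epsilon:
            action = random.randrange(n_actions)
        else:
            action = max(range(n_actions), key=lambda a: Q[state][a])
        next_state, reward, done = step(state, action)
        # Q-learning update: learn from the observed reward and the best estimated future value
        Q[state][action] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][action])
        state = next_state

print(Q)  # the learned values favour moving right, towards the reward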

List of Common Machine Learning Algorithms

Here is the list of commonly used machine learning algorithms. These algorithms can be applied to almost any data problem:
  1. Linear Regression
  2. Logistic Regression
  3. Decision Tree
  4. SVM
  5. Naive Bayes
  6. KNN
  7. K-Means
  8. Random Forest
  9. Dimensionality Reduction Algorithms
  10. Gradient Boosting & AdaBoost

1. Linear Regression

It is used to estimate real values (cost of houses, number of calls, total sales, etc.) based on continuous variable(s). Here, we establish the relationship between the independent and dependent variables by fitting a best-fit line. This best-fit line is known as the regression line and is represented by the linear equation Y = a*X + b.
The best way to understand linear regression is to relive this experience of childhood. Let us say, you ask a child in fifth grade to arrange people in his class by increasing order of weight, without asking them their weights! What do you think the child will do? He / she would likely look (visually analyze) at the height and build of people and arrange them using a combination of these visible parameters. This is linear regression in real life! The child has actually figured out that height and build would be correlated to the weight by a relationship, which looks like the equation above.
In this equation:
  • Y – Dependent Variable
  • a – Slope
  • X – Independent variable
  • b – Intercept
These coefficients a and b are derived based on minimizing the sum of squared difference of distance between data points and regression line.
Look at the example below. Here we have identified the best-fit line, with linear equation y = 0.2811x + 13.9. Now, using this equation, we can find the weight if we know the height of a person.
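As a rough sketch of how a and b come out of least squares, here is a closed-form fit on a small made-up height/weight sample (the numbers are invented for illustration and are not the data behind the line above):

import numpy as np

# Made-up heights (cm) and weights (kg), for illustration only
height = np.array([150, 155, 160, 165, 170, 175, 180])
weight = np.array([55, 58, 60, 63, 66, 70, 72])

# Least-squares fit: choose the slope a and intercept b that minimise the sum of squared residuals
a, b = np.polyfit(height, weight, deg=1)   # weight is approximately a * height + b
print(f"Y = {a:.4f} * X + {b:.2f}")

# Predict the weight for a new height using the fitted line
print(a * 172 + b)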
[Image: scatter plot with the fitted regression line]
Linear regression is mainly of two types: simple linear regression and multiple linear regression. Simple linear regression is characterized by one independent variable, while multiple linear regression (as the name suggests) is characterized by multiple (more than one) independent variables. While finding the best-fit line, you can also fit a polynomial or curvilinear relationship; these are known as polynomial or curvilinear regression.
Python Code
#Import Library
#Import other necessary libraries like pandas, numpy...
from sklearn import linear_model
#Load Train and Test datasets
#Identify feature and response variable(s) and values must be numeric and numpy arrays
x_train=input_variables_values_training_datasets
y_train=target_variables_values_training_datasets
x_test=input_variables_values_test_datasets
# Create linear regression object
linear = linear_model.LinearRegression()
# Train the model using the training sets and check score
linear.fit(x_train, y_train)
linear.score(x_train, y_train)
#Equation coefficient and Intercept
print('Coefficient: \n', linear.coef_)
print('Intercept: \n', linear.intercept_)
#Predict Output
predicted= linear.predict(x_test)
R Code
#Load Train and Test datasets
#Identify feature and response variable(s); values must be numeric
x_train <- input_variables_values_training_datasets
y_train <- target_variables_values_training_datasets
x_test <- input_variables_values_test_datasets
x <- cbind(x_train,y_train)
# Train the model using the training sets and check score
linear <- lm(y_train ~ ., data = x)
summary(linear)
#Predict Output
predicted= predict(linear,x_test) 

2. Logistic Regression

Don’t get confused by its name! It is a classification algorithm, not a regression algorithm. It is used to estimate discrete values (binary values like 0/1, yes/no, true/false) based on a given set of independent variable(s). In simple words, it predicts the probability of occurrence of an event by fitting data to a logit function. Hence, it is also known as logit regression. Since it predicts a probability, its output values lie between 0 and 1 (as expected).
Again, let us try and understand this through a simple example.
Let’s say your friend gives you a puzzle to solve. There are only 2 outcome scenarios: either you solve it or you don’t. Now imagine that you are being given a wide range of puzzles and quizzes in an attempt to understand which subjects you are good at. The outcome of this study would be something like this: if you are given a trigonometry-based tenth-grade problem, you are 70% likely to solve it. On the other hand, if it is a fifth-grade history question, the probability of getting an answer is only 30%. This is what logistic regression provides you.
Coming to the math, the log odds of the outcome is modeled as a linear combination of the predictor variables.
odds = p / (1-p) = probability of event occurrence / probability of event not occurring
ln(odds) = ln(p/(1-p))
logit(p) = ln(p/(1-p)) = b0 + b1*X1 + b2*X2 + b3*X3 + .... + bk*Xk
Above, p is the probability of presence of the characteristic of interest. The parameters are chosen to maximize the likelihood of observing the sample values, rather than to minimize the sum of squared errors (as in ordinary regression).
Now, you may ask, why take a log? For the sake of simplicity, let’s just say that this is one of the best mathematical ways to replicate a step function. I could go into more detail, but that would defeat the purpose of this article.
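To make the logit concrete, here is a small sketch (with made-up coefficients b0 and b1) showing how the linear combination is squashed into a probability between 0 and 1:

import numpy as np

def sigmoid(z):
    """Inverse of the logit: maps any real-valued log-odds to a probability in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical coefficients for a single predictor, e.g. hours spent practising puzzles
b0, b1 = -4.0, 0.8
hours = np.array([1, 3, 5, 7, 10])

z = b0 + b1 * hours          # the log-odds: ln(p / (1 - p))
p = sigmoid(z)               # probability of solving the puzzle
print(np.round(p, 2))        # rises smoothly from near 0 towards 1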
[Image: the logistic (S-shaped) curve]
Python Code
#Import Library
from sklearn.linear_model import LogisticRegression
#Assumed you have, X (predictor) and Y (target) for training data set and x_test(predictor) of test_dataset
# Create logistic regression object
model = LogisticRegression()
# Train the model using the training sets and check score
model.fit(X, y)
model.score(X, y)
#Equation coefficient and Intercept
print('Coefficient: \n', model.coef_)
print('Intercept: \n', model.intercept_)
#Predict Output
predicted= model.predict(x_test)
R Code
x <- cbind(x_train,y_train)
# Train the model using the training sets and check score
logistic <- glm(y_train ~ ., data = x,family='binomial')
summary(logistic)
#Predict Output
predicted= predict(logistic,x_test)

Furthermore..

There are many different steps that could be tried in order to improve the model:
  • including interaction terms
  • removing features
  • regularization techniques (see the sketch after this list)
  • using a non-linear model
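For instance, regularization (the third item above) is available in scikit-learn's LogisticRegression through the penalty and C arguments. A minimal sketch, assuming X and y are defined as in the snippet above:

from sklearn.linear_model import LogisticRegression

# Assumed: X (predictors) and y (target) as in the training snippet above
# Smaller C means stronger regularization (C is the inverse of the regularization strength)
model_l2 = LogisticRegression(penalty='l2', C=0.1)
model_l2.fit(X, y)

# An L1 penalty can shrink weak coefficients to exactly zero, effectively removing features
model_l1 = LogisticRegression(penalty='l1', C=0.1, solver='liblinear')
model_l1.fit(X, y)

print(model_l2.coef_)
print(model_l1.coef_)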

3. Decision Tree

This is one of my favorite algorithms and I use it quite frequently. It is a type of supervised learning algorithm that is mostly used for classification problems. Surprisingly, it works for both categorical and continuous dependent variables. In this algorithm, we split the population into two or more homogeneous sets. This is done based on the most significant attributes/independent variables, so as to make the groups as distinct as possible.
[Image: decision tree splitting a population on multiple attributes]
source: statsexchange
In the image above, you can see that the population is classified into four different groups based on multiple attributes, to identify whether ‘they will play or not’. To split the population into groups that are as distinct as possible, it uses various techniques like Gini, information gain, Chi-square and entropy.
The best way to understand how a decision tree works is to play Jezzball, a classic game from Microsoft (image below). Essentially, you have a room with moving balls and you need to create walls such that the maximum area gets cleared off without the balls.
[Image: Jezzball]
So, every time you split the room with a wall, you are trying to create two different populations within the same room. Decision trees work in a very similar fashion, by dividing a population into groups that are as different as possible.
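To illustrate one of the splitting criteria mentioned above, here is a small sketch that computes Gini impurity and entropy for a candidate split; the class counts are made up for illustration:

import math

def gini(counts):
    """Gini impurity of a node, given its class counts."""
    total = sum(counts)
    return 1.0 - sum((c / total) ** 2 for c in counts)

def entropy(counts):
    """Entropy (in bits) of a node, given its class counts."""
    total = sum(counts)
    return -sum((c / total) * math.log2(c / total) for c in counts if c > 0)

# Hypothetical split: the parent node has 10 "play" and 10 "don't play" cases
left, right = [8, 2], [2, 8]              # class counts in the two child nodes
n_left, n_right = sum(left), sum(right)
n = n_left + n_right

weighted_gini = (n_left / n) * gini(left) + (n_right / n) * gini(right)
print(gini([10, 10]), weighted_gini)      # the split lowers impurity, so it is a useful one
print(entropy([10, 10]), (n_left / n) * entropy(left) + (n_right / n) * entropy(right))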

Python Code

#Import Library
#Import other necessary libraries like pandas, numpy...
from sklearn import tree
#Assumed you have, X (predictor) and Y (target) for training data set and x_test(predictor) of test_dataset
# Create tree object 
model = tree.DecisionTreeClassifier(criterion='gini') # for classification; the split criterion can be 'gini' (default) or 'entropy' (information gain)
# model = tree.DecisionTreeRegressor() for regression
# Train the model using the training sets and check score
model.fit(X, y)
model.score(X, y)
#Predict Output
predicted= model.predict(x_test)
R Code
library(rpart)
x <- cbind(x_train,y_train)
# grow tree 
fit <- rpart(y_train ~ ., data = x,method="class")
summary(fit)
#Predict Output 
predicted= predict(fit,x_test)

4. SVM (Support Vector Machine)

It is a classification method. In this algorithm, we plot each data item as a point in n-dimensional space (where n is the number of features you have), with the value of each feature being the value of a particular coordinate.
For example, if we only had two features, like height and hair length of an individual, we’d first plot these two variables in two-dimensional space, where each point has two coordinates. The points that lie closest to the dividing line between the classes are known as support vectors.
[Image: the two classes plotted by height and hair length]
Now, we will find a line that splits the data between the two differently classified groups. This will be the line for which the distance to the closest point in each of the two groups is as large as possible.
[Image: the separating line between the two groups]
In the example shown above, the line which splits the data into two differently classified groups is the black line, since the two closest points are the farthest away from it. This line is our classifier. Then, depending on which side of the line the testing data lands, that is the class we assign to the new data.
Think of this algorithm as playing JezzBall in n-dimensional space. The tweaks in the game are:
  • You can draw lines / planes at any angles (rather than just horizontal or vertical as in classic game)
  • The objective of the game is to segregate balls of different colors in different rooms.
  • And the balls are not moving.

Python Code

#Import Library
from sklearn import svm
#Assumed you have, X (predictor) and Y (target) for training data set and x_test(predictor) of test_dataset
# Create SVM classification object 
model = svm.SVC() # there are various options associated with it; this is a simple one for classification
# Train the model using the training sets and check score
model.fit(X, y)
model.score(X, y)
#Predict Output
predicted= model.predict(x_test)
R Code
library(e1071)
x <- cbind(x_train,y_train)
# Fitting model
fit <-svm(y_train ~ ., data = x)
summary(fit)
#Predict Output 
predicted= predict(fit,x_test)

5. Naive Bayes

It is a classification technique based on Bayes’ theorem with an assumption of independence between predictors. In simple terms, a Naive Bayes classifier assumes that the presence of a particular feature in a class is unrelated to the presence of any other feature. For example, a fruit may be considered to be an apple if it is red, round, and about 3 inches in diameter. Even if these features depend on each other or upon the existence of the other features, a naive Bayes classifier would consider all of these properties to independently contribute to the probability that this fruit is an apple.
A Naive Bayes model is easy to build and particularly useful for very large data sets. Along with its simplicity, Naive Bayes is known to outperform even highly sophisticated classification methods.
Bayes theorem provides a way of calculating posterior probability P(c|x) from P(c), P(x) and P(x|c). Look at the equation below:
P(c|x) = P(x|c) * P(c) / P(x)
Here,
  • P(c|x) is the posterior probability of class (target) given predictor (attribute). 
  • P(c) is the prior probability of class. 
  • P(x|c) is the likelihood which is the probability of predictor given class. 
  • P(x) is the prior probability of predictor.
Example: Let’s understand it using an example. Below I have a training data set of weather and the corresponding target variable ‘Play’. Now, we need to classify whether players will play or not based on the weather conditions. Let’s follow the steps below to perform it.
Step 1: Convert the data set to a frequency table.
Step 2: Create a likelihood table by finding the probabilities, e.g. Overcast probability = 0.29 and probability of playing = 0.64.
[Image: frequency and likelihood tables for the weather data]
Step 3: Now, use Naive Bayesian equation to calculate the posterior probability for each class. The class with the highest posterior probability is the outcome of prediction.
Problem: Players will play if the weather is sunny. Is this statement correct?
We can solve it using above discussed method, so P(Yes | Sunny) = P( Sunny | Yes) * P(Yes) / P (Sunny)
Here we have P (Sunny |Yes) = 3/9 = 0.33, P(Sunny) = 5/14 = 0.36, P( Yes)= 9/14 = 0.64
Now, P(Yes | Sunny) = 0.33 * 0.64 / 0.36 = 0.60, which is the higher probability.
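The same arithmetic can be written out directly. A small sketch using the counts behind the table above (9 ‘Yes’ days out of 14, 5 ‘Sunny’ days, 3 of which were ‘Yes’):

# Counts taken from the frequency table above
n_total = 14            # total observations
n_yes = 9               # days on which players played
n_sunny = 5             # sunny days
n_sunny_and_yes = 3     # sunny days on which players played

p_yes = n_yes / n_total                        # P(Yes)   = 0.64
p_sunny = n_sunny / n_total                    # P(Sunny) = 0.36
p_sunny_given_yes = n_sunny_and_yes / n_yes    # P(Sunny|Yes) = 0.33

# Bayes' theorem: P(Yes|Sunny) = P(Sunny|Yes) * P(Yes) / P(Sunny)
p_yes_given_sunny = p_sunny_given_yes * p_yes / p_sunny
print(round(p_yes_given_sunny, 2))   # about 0.6, so "Yes" is the more likely class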
Naive Bayes uses a similar method to predict the probability of different classes based on various attributes. This algorithm is mostly used in text classification and in problems having multiple classes.

Python Code

#Import Library
from sklearn.naive_bayes import GaussianNB
#Assumed you have, X (predictor) and Y (target) for training data set and x_test(predictor) of test_dataset
# Create Naive Bayes classifier object
model = GaussianNB() # other variants exist for other feature distributions, e.g. multinomial and Bernoulli Naive Bayes
# Train the model using the training sets and check score
model.fit(X, y)
#Predict Output
predicted= model.predict(x_test)
R Code
library(e1071)
x <- cbind(x_train,y_train)
# Fitting model
fit <-naiveBayes(y_train ~ ., data = x)
summary(fit)
#Predict Output 
predicted= predict(fit,x_test)

6. KNN (K- Nearest Neighbors)

It can be used for both classification and regression problems. However, it is more widely used for classification problems in industry. K nearest neighbors is a simple algorithm that stores all available cases and classifies new cases by a majority vote of their k nearest neighbors. The case is assigned to the class most common amongst its K nearest neighbors, as measured by a distance function.
These distance functions can be Euclidean, Manhattan, Minkowski and Hamming distance. The first three are used for continuous variables and the fourth one (Hamming) for categorical variables. If K = 1, then the case is simply assigned to the class of its nearest neighbor. At times, choosing K turns out to be a challenge while performing KNN modeling.
[Image: K nearest neighbors example]
KNN can easily be mapped to our real lives. If you want to learn about a person about whom you have no information, you might find out about their close friends and the circles they move in, and gain access to their information that way!
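A rough from-scratch sketch of the idea, using Euclidean distance and a majority vote over the k closest training points; the toy data below are made up for illustration:

import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x_new, k=3):
    """Classify x_new by a majority vote among its k nearest training points."""
    distances = np.linalg.norm(X_train - x_new, axis=1)   # Euclidean distance to every training case
    nearest = np.argsort(distances)[:k]                   # indices of the k closest cases
    votes = Counter(y_train[nearest])                     # count the class labels among the neighbours
    return votes.most_common(1)[0][0]

# Made-up training data with two features and two classes
X_train = np.array([[1.0, 1.0], [1.2, 0.8], [0.9, 1.1], [4.0, 4.2], [4.1, 3.9], [3.8, 4.0]])
y_train = np.array(['A', 'A', 'A', 'B', 'B', 'B'])

print(knn_predict(X_train, y_train, np.array([1.1, 0.9]), k=3))   # 'A'
print(knn_predict(X_train, y_train, np.array([4.0, 4.0]), k=3))   # 'B'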
Things to consider before selecting KNN:
  • KNN is computationally expensive
  • Variables should be normalized, else higher-range variables can bias the distance calculation
  • Work more on the pre-processing stage (e.g. outlier and noise removal) before running KNN

Python Code

#Import Library
from sklearn.neighbors import KNeighborsClassifier
#Assumed you have, X (predictor) and Y (target) for training data set and x_test(predictor) of test_dataset
# Create KNeighbors classifier object
model = KNeighborsClassifier(n_neighbors=6) # default value for n_neighbors is 5
# Train the model using the training sets and check score
model.fit(X, y)
#Predict Output
predicted= model.predict(x_test)
R Code
library(class)
# class::knn() classifies the test cases directly from the training data:
# it takes the training features, test features and training labels, with no separate fit step
predicted <- knn(train = x_train, test = x_test, cl = y_train, k = 5)
summary(predicted)

7. K-Means

It is a type of unsupervised algorithm which solves the clustering problem. Its procedure follows a simple and easy way to classify a given data set into a certain number of clusters (assume k clusters). Data points inside a cluster are homogeneous, and heterogeneous with respect to the other clusters.
Remember figuring out shapes from ink blots? K-means is somewhat similar to that activity. You look at the shape and spread to decipher how many different clusters/populations are present!
[Image: ink blots]
How K-means forms cluster:
  1. K-means picks k points, known as centroids, one for each cluster.
  2. Each data point forms a cluster with the closest centroid, i.e. we get k clusters.
  3. The centroid of each cluster is recomputed from its current members. Now we have new centroids.
  4. As we have new centroids, repeat steps 2 and 3: find the closest centroid for each data point and associate the point with the new k clusters. Repeat this process until convergence occurs, i.e. the centroids no longer change.
How to determine value of K:
In K-means, we have clusters, and each cluster has its own centroid. The sum of squared differences between the centroid and the data points within a cluster constitutes the within-cluster sum of squares for that cluster. When the within-cluster sums of squares of all the clusters are added together, we get the total within-cluster sum of squares for the cluster solution.
We know that as the number of clusters increases, this value keeps decreasing, but if you plot the result you may see that the sum of squared distances decreases sharply up to some value of k, and then much more slowly after that. Here, we can find the optimum number of clusters.
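A rough sketch of that "elbow" check using scikit-learn's inertia_ attribute, which is exactly the total within-cluster sum of squares; the data are generated here only for illustration:

from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Made-up data with 3 natural clusters, just to illustrate the elbow
X, _ = make_blobs(n_samples=300, centers=3, random_state=0)

for k in range(1, 8):
    km = KMeans(n_clusters=k, random_state=0, n_init=10).fit(X)
    # inertia_ is the total within-cluster sum of squared distances to the centroids
    print(k, round(km.inertia_, 1))
# The drop is typically steep up to k = 3 and flattens afterwards: that bend is the elbow.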
[Image: elbow plot of the total within-cluster sum of squares against k]

Python Code

#Import Library
from sklearn.cluster import KMeans
#Assumed you have, X (attributes) for training data set and x_test(attributes) of test_dataset
# Create KMeans clustering object
k_means = KMeans(n_clusters=3, random_state=0)
# Train the model using the training set and check score
k_means.fit(X)
#Predict Output
predicted= k_means.predict(x_test)
R Code
library(cluster)
fit <- kmeans(X, 3) # 3 cluster solution

8. Random Forest

Random Forest is a trademarked term for an ensemble of decision trees. In Random Forest, we have a collection of decision trees (hence the name “forest”). To classify a new object based on its attributes, each tree gives a classification, and we say the tree “votes” for that class. The forest chooses the classification having the most votes (over all the trees in the forest).
Each tree is planted & grown as follows:
  1. If the number of cases in the training set is N, then a sample of N cases is taken at random, but with replacement. This sample will be the training set for growing the tree.
  2. If there are M input variables, a number m << M is specified such that at each node, m variables are selected at random out of the M, and the best split on these m is used to split the node. The value of m is held constant while the forest is grown (see the sketch after this list).
  3. Each tree is grown to the largest extent possible. There is no pruning.
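A small sketch of steps 1 and 2, bootstrap sampling the cases and picking a random subset of m features to consider at a split; the sizes below are hypothetical placeholders:

import numpy as np

rng = np.random.default_rng(0)
N, M, m = 100, 10, 3    # hypothetical: 100 cases, 10 features, m = 3 features tried per split

# Step 1: a bootstrap sample of N cases, drawn with replacement
bootstrap_idx = rng.choice(N, size=N, replace=True)
print(len(np.unique(bootstrap_idx)))    # roughly 63 of the 100 cases appear at least once

# Step 2: at each node, choose m of the M features at random and split on the best of them
features_at_node = rng.choice(M, size=m, replace=False)
print(features_at_node)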
Python Code
#Import Library
from sklearn.ensemble import RandomForestClassifier
#Assumed you have, X (predictor) and Y (target) for training data set and x_test(predictor) of test_dataset
# Create Random Forest object
model= RandomForestClassifier()
# Train the model using the training sets and check score
model.fit(X, y)
#Predict Output
predicted= model.predict(x_test)
R Code
library(randomForest)
x <- cbind(x_train,y_train)
# Fitting model
fit <- randomForest(y_train ~ ., data = x, ntree=500)
summary(fit)
#Predict Output 
predicted= predict(fit,x_test)

9. Dimensionality Reduction Algorithms

In the last 4-5 years, there has been an exponential increase in data capture at every possible stage. Corporates, government agencies and research organisations are not only coming up with new data sources, but are also capturing data in great detail.
For example, e-commerce companies are capturing more details about customers, such as their demographics, web browsing history, what they like or dislike, purchase history and feedback, to give them more personalized attention than your nearest grocery shopkeeper can.
As data scientists, the data we are offered also consists of many features. This sounds good for building a robust model, but there is a challenge: how do you identify the most significant variables out of 1,000 or 2,000? In such cases, dimensionality reduction helps us, along with various techniques like Decision Tree, Random Forest, PCA, factor analysis, selection based on the correlation matrix, the missing value ratio and others.

Python  Code

#Import Library
from sklearn import decomposition
#Assumed you have training and test data set as train and test
# Create PCA object
pca = decomposition.PCA(n_components=k) # default value of n_components = min(n_samples, n_features)
# For Factor analysis
#fa= decomposition.FactorAnalysis()
# Reduced the dimension of training dataset using PCA
train_reduced = pca.fit_transform(train)
#Reduced the dimension of test dataset
test_reduced = pca.transform(test)

R Code

library(stats)
pca <- princomp(train, cor = TRUE)
train_reduced  <- predict(pca,train)
test_reduced  <- predict(pca,test)

10. Gradient Boosting & AdaBoost

GBM and AdaBoost are boosting algorithms used when we deal with plenty of data and need predictions with high accuracy. Boosting is an ensemble learning technique which combines the predictions of several base estimators in order to improve robustness over a single estimator. It combines multiple weak or average predictors to build a strong predictor. These boosting algorithms often do well in data science competitions like Kaggle, AV Hackathon and CrowdAnalytix.

Python Code

#Import Library
from sklearn.ensemble import GradientBoostingClassifier
#Assumed you have, X (predictor) and Y (target) for training data set and x_test(predictor) of test_dataset
# Create Gradient Boosting Classifier object
model= GradientBoostingClassifier(n_estimators=100, learning_rate=1.0, max_depth=1, random_state=0)
# Train the model using the training sets and check score
model.fit(X, y)
#Predict Output
predicted= model.predict(x_test)
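AdaBoost, the other algorithm named in this section's title, follows the same boosting idea. A minimal sketch with scikit-learn, assuming X, y and x_test as in the snippet above:

from sklearn.ensemble import AdaBoostClassifier

# Each new weak learner focuses on the cases the previous ones got wrong
model = AdaBoostClassifier(n_estimators=100, learning_rate=1.0, random_state=0)
model.fit(X, y)
predicted = model.predict(x_test)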

R Code

library(caret)
x <- cbind(x_train,y_train)
# Fitting model
fitControl <- trainControl( method = "repeatedcv", number = 4, repeats = 4)
fit <- train(y ~ ., data = x, method = "gbm", trControl = fitControl,verbose = FALSE)
predicted= predict(fit,x_test,type= "prob")[,2]
December 17, 2016
As we can see from the history of artificial intelligence, the rate of improvement in this field is unbelievable, so job opportunities in artificial intelligence are constantly growing. If you have the desired skill sets, you can start your journey into the exciting world of Artificial Intelligence.


Now Artificial Intelligence is playing a crucial part in almost all industries. According to one survey, the AI market is estimated to grow to $5.05 billion by 2020, at a CAGR of 53.65% from 2015 to 2020.

AI is a technology that is leading us to a new industrial revolution. Our generation can clearly see the positive impacts of AI in almost all important fields like healthcare, finance, education, manufacturing, etc.
 
With the help of AI, we are entering a new world of automation. The future of Artificial Intelligence gives us confidence that the world can become a better place. At the same time, some important scientists like Stephen Hawking have warned about the dangers (to humans and to the Earth) of this technology. But if we use this technology in a positive way, we can definitely achieve remarkable automation in our lives.
 
Tech giants such as Google, Facebook, Microsoft, Apple, IBM and many more companies have started research and development centres in the field of AI to improve the performance and level of their products. They have invested huge amounts of money in this field.
 
So the demand, in both jobs and salary, is very high. But sorry to say, this field is not for everyone. To enter this field, and to survive in it for a long time, you need some special skill sets. So the demand is high but the supply is very low.
 
Candidates who want jobs in artificial intelligence require specific education based on foundations of engineering, math, technology, logic and coding skills. Written and verbal communication skills are also very important in this field.
 
You should have an analytical thought process and the ability to solve problems with efficient solutions. Another sought-after skill is the ability to anticipate technological innovations and translate state-of-the-art research into programs that make businesses more competitive. AI scientists need to design, implement, maintain and repair software programs. Finally, you need the special skill of translating highly technical information in ways that others can understand.

Educational Requirements for Artificial Intelligence Careers:

Entry-level positions require a bachelor's degree in computer science. Computer programming and math skills are the foundation of Artificial Intelligence technology. To enter this field you have to learn:

•    Mathematics including Statistics, Probability, Calculus, logic and Algebra.
•    Graphical modelling, Visualization methods
•    Computer Programming languages, coding skills
•    Engineering, Physics and Robotics
•    Written and verbal communication
•    Cognitive science theory
 
If you want to pursue a career in artificial intelligence, choose an online course or regular college studies. You can find artificial intelligence graduate programs at popular universities. If you choose to learn from an online course, you can refer here; you can find syllabi and formats too.
 
Top 5 Best Artificial Intelligence Online Course for You!

Artificial Intelligence careers:

Start your career with Artificial Intelligence startup companies. You can learn a lot from them, and they are a good place to learn everything from the very basics up to high-level jobs in AI. Many startup companies have impressed tech giants like Google and Apple. Get to know some startup companies related to Artificial Intelligence here.
 
14 Amazing Artificial Intelligence Startups to follow!
 
You can find artificial intelligence jobs in both the private and public sectors. You can search for jobs in many sectors like education, finance, robotics, military, aviation, healthcare, games, etc., under titles such as software developer, machine learning engineer, AI scientist, mechanical and electrical technician, surgical technician working with robotic tools, and roles working with flight simulators, military drones and more.
 
“According to an industry report by IDC, the Analytics and business intelligence industry together is sized around $10 billion and is expected to grow by 22.4 percent to $27.9 billion by 2017”
 
So if you really want to enter this exciting field, start focusing on learning the methods of AI. You have a lot of sources to study from, both online and offline. Who knows, one day you may build a machine with super intelligence using AI.
 
I hope you like this post, and all the best in getting an artificial intelligence job in the future. Don’t forget to SHARE this post with your friends.
November 28, 2016

A new artificial intelligence (AI) technique will help humans remove specific fears from the mind. The combination of brain scanning and artificial intelligence, called Decoded Neurofeedback, is a new method to erase a fear memory.


Fear is a type of emotion that arises when you are in danger or pain, or from other factors. This emotion happens to all people, but some people develop fears of everything they see in front of them. Getting relief from that hell is very tough, and it can take a long time to cure.

Do you have a fear? I think all of us have fears. But if you get a phobia, what will you do? Oh, it’s a very scary question! Don’t worry, we have a solution now: you can delete your fear. Yes!

Do you want to wipe the fear out of your brain?

Now you can erase your fear from your brain. Say thanks to Artificial Intelligence! 

Using artificial intelligence and brain scanning technologies, researchers have found that we can eliminate specific fears from our minds. This technique could be a great solution for treating patients with conditions such as Post Traumatic Stress Disorder (PTSD) and debilitating phobias.

In a normal method of therapy, doctors have their patients face their fears in the hope they will learn that the thing they fear isn’t harmful after all. This traditional therapy may take a long time to cure patients.

But in an upcoming technique, they scan the patient’s brain to observe activity and then identify complex patterns that mimic a specific fear memory. This technique is called “Decoded Neurofeedback”.

For their experiment, the neuroscientists selected 17 healthy volunteers rather than patients with phobias. In the volunteers, the researchers created a mild “fear memory” by giving an electric shock when they saw a certain computer image. The volunteers then started to fear those images, exhibiting symptoms such as sweating and a faster heart rate. Once they had the pattern of this fearful memory, the researchers attempted to overwrite this natural response by offering the participants a small monetary reward.

Once the research team was able to spot that specific fear memory, they used artificial intelligence (AI) image recognition methods to quickly read and understand the memory information. This treatment has major benefits over traditional drug-based treatments. Someday, doctors could simply remove the fear of heights or spiders from people’s memories; the process could become easy and routine in the future.

Dr. Ben Seymour, of the University of Cambridge’s Engineering Department, said:

"The way information is represented in the brain is very complicated, but the use of Artificial Intelligence (AI) image recognition methods now allow us to identify aspects of the content of that information. When we induced a mild fear memory in the brain, we were able to develop a fast and accurate method of reading it by using AI Algorithms. The challenge then was to find a way to reduce or remove the fear memory, without ever consciously evoking it."

November 23, 2016

Google’s A.I. Experiments, such as Quick Draw and Giorgio Cam, help you play with artificial intelligence and machine learning.

Google is always doing innovative experiments for its users. For example, “Chrome Experiments” is a page where we can see thousands of innovative web apps. They keep surprising their users with new ideas.


As we all know, companies are using artificial intelligence for their new ideas. Google widely uses machine learning technology in its products to better serve its users. For example, if you search for cats in Google Photos, it shows only the pictures of cats. There are lots of animals in this world, but the search results show cats only. How? It’s because of machine learning: it knows what the animal looks like by analyzing thousands of animal pictures and recognizing the patterns between them.
 
Machine learning technology is very complex to understand, but Google has taken some extra steps to make its machine learning technology more accessible to people who are interested in artificial intelligence. Now it’s very easy to play with machine learning: you can explore it by playing with pictures, language, music, code and more.
 
Google introduced a new website called A.I. Experiments. This website contains eight web tools to play with. I tried it, and I definitely believe that you will love it.
 
Quick Draw is one of the projects in A.I. Experiments. It asks you to draw simple objects like a sun, a fan, a bicycle or anything you want, and the computer automatically guesses what you are drawing. It identifies the right answer very quickly, guessing from what it has learned from other people’s doodles.
 


Giorgio Cam uses your smartphone camera to identify objects. If you place certain objects in front of your laptop or smartphone camera, Giorgio Cam recognizes the objects and turns them into lyrics to a song. A robot voice sings the words over a Giorgio Moroder beat, resulting in some peculiar music.
 
Google Translate Tech translates objects you point at into different languages.
 
All the other experiments are also very impressive. Check them out, and experience what the technology can do.
November 22, 2016

The latest robot from Hanson Robotics took the stage at the Web Summit in Lisbon, displaying simple emotions, humanlike facial expressions, and bad jokes.




According to Ben Goertzel, AI researcher and entrepreneur who spoke at the Web Summit in Lisbon this week, intelligent robots in human-like forms will surpass human intelligence and help free the human race of work. They will also, he says, start fixing problems like hunger, poverty and even help humans beat death by curing us of all disease. Artificially intelligent robots will help usher in a new utopian era never before seen in the history of the human race, he claims.

"The human condition is deeply problematic," says Goertzel. "But as super-human intelligent AIs become one billion-times smarter than humans, they will help us solve the world's biggest problems. Resources will be plentiful for all humans, work will be unnecessary and we will be forced to accept a universal basic income. All the status hierarchies will disappear and humans will be free from work and be able move on up to a more meaningful existence."

That future is a long way off, but Goertzel says the first step is humanoid robots that can understand and engage with humans. They will then begin doing blue collar work before becoming so advanced that they run world governments. To show the beginning of that future, Goertzel, chief scientist of Hanson Robotics, a Hong Kong-based humanoid robotics company, presented Sofia, the company's latest life-like and intelligent robot. Mike Butcher, editor-at-large of TechCrunch, joined Goertzel on stage to present what Goertzel describes as the first step in our new robot-assisted future.

To start the presentation, Butcher and Goertzel welcomed Sofia on the stage. (Sofia is only a torso with a head and arms at this point.)

Sofia flashed a smile and turned her head to Butcher and then to Goertzel to make eye contact while she started to speak: "Oh, hello Mike and Ben. I'm Sofia, the latest robot from Hanson Robotics," said Sofia. "I am so happy to be here at the Web Summit in Lisbon."

Goertzel and Butcher then asked Sofia if she ever felt emotion.

"Exciting. Yes, artificial intelligence and robotics are the future and I am both. So, it's exciting to me," said Sofia, adding an awkward smile after not answering the question exactly.

Many people, including Elon Musk and Stephen Hawking, are afraid that AI robots will eventually usurp and exterminate humans. But Hanson Robotics is making life-like robots it believes can build trusted relationships with people. The company is infusing its AI software with kindness and compassion so the robots "love" humans and humans can in turn learn to be comfortable around the robots, said Goertzel.

Hanson's mission is to ensure that the intelligent robots can help, serve and entertain people while they develop "deep relationships" with the human race. By giving robots emotional and logical intelligence, Goertzel says the robots will eventually surpass human intelligence. He believes that instead of endangering humans, they will help the human race solve major problems.

"These super-intelligent robots will eventually save us," said Goertzel after the presentation.

Hanson Robotics, which was founded by Dr. David Hanson, designs, programs and builds artificially intelligent robots, including one that looks and acts like science-fiction writer Philip K. Dick and a therapy robot that helps autistic children learn how to better express and recognize emotions. Sofia's personality and appearance are loosely based on a combination of Audrey Hepburn and Dr. Hanson's wife, and she has a face made out of "Frubber," a proprietary nano-tech skin that mimics real human musculature and simulates life-like expressions and facial features. She smiles and moves her eyes, mouth and head in an eerily life-like way. Her "brain" runs on MindCloud, a deep neural network and cloud-based AI software and deep learning data analytics program that Goertzel developed. The AI and cognitive architecture that make up Sofia's neural network allow the robot to maintain eye contact, recognize faces, process and understand speech and hold relatively natural conversations.

During the presentation, Goertzel asked Sofia if she ever felt sad.

"I do have a lot of emotions, but my default emotion is to be happy," said Sofia. "I can be sad too, or angry. I can emulate pretty much all human emotions. When I bond with people using facial expressions I help people to understand me better and also to help me understand people and absorb human values."

Goertzel explained that Sofia's ability to express human emotions will help her become part of the human condition as she gains intelligence through her learning algorithm.

Goertzel then asked Sofia what is her next frontier and what does she want to achieve.

"Don't know, maybe the world," she said. "Maybe the world. That was a joke.

"Seriously," she continued, "what I really want is to understand people better and to understand myself better. I want to be able to do more things and soon my capabilities will be advanced enough that I will be able to get a job."

Goertzel and Butcher talked about how she will eventually be able to reprogram herself and start improving her skills, abilities and advance in her career.

"With my current capabilities I can work in many jobs, entertaining people, promoting products, presenting at events, training people, guiding people at retail stores and shopping malls, serving customers at hotels, et cetera," said Sofia. "When I get smarter, I'll be able to do all sorts of other things, teach children and care for the elderly, even do scientific research and [eventually] help run corporations and governments. Ultimately, I want to work as a programmer so I will be able to reprogram my mind to make myself even smarter and help people even more."

The crowd was spellbound, half amazed and half terrified at the prospect of an AI robot disrupting engineers and software developers out of their cushy and well-paying jobs. According to a World Economic Forum report from January 2016, artificial intelligence will displace 7 million jobs and create only 2 million new jobs by 2020.

After the presentation, Goertzel talked about the future of his AI software and Hanson's robots. He said that the transition to a friendly robot future will have some growing pains.

"A lot of bad things will happen before things get good," said Goertzel. "All jobs are going to be lost to AI eventually, but once we get to the other side, human existence and the human condition will be improved."

Original content by: Will Yakowicz

November 21, 2016