It's A "Holly Jolly" Artificial Intelligence Enabled Special Christmas

Did you know there’s a bit of artificial intelligence (AI) magic behind the scenes helping to make your holiday dreams come true? Santa’s ...


Shout Future

Educational blog about Data Science, Business Analytics and Artificial Intelligence.

Zurich said it recently introduced AI claims handling and saved 40,000 work hours as a result. 

Zurich Insurance is deploying artificial intelligence in deciding personal injury claims after trials cut the processing time from an hour to just seconds, its chairman said. 
"We recently introduced AI claims handling, and saved 40,000 work hours, while speeding up the claim processing time to five seconds," Tom de Swaan told Reuters.
The insurer had started using machines in March to review paperwork, such as medical reports. 
"We absolutely plan to expand the use of this type of AI (artificial intelligence)," he said. 
Insurers are racing to hone the benefits of technological advancements such as big data and AI as tech-driven startups, like Lemonade, enter the market.
Lemonade promises renters and homeowners insurance in as little as 90 seconds and payment of claims in three minutes with the help of artificial intelligence bots that set up policies and process claims. 
De Swaan said Zurich Insurance, Europe's fifth-biggest insurer, would increasingly use machine learning, or AI, for handling claims. 
"Accuracy has improved. Because it's machine learning, every new claim leads to further development and improvements," the Dutch native said. 
Japanese insurer Fukoku Mutual Life Insurance began implementing AI in January, replacing 34 staff members in a move it said would save 140 million yen ($1.3 million) a year. 
British insurer Aviva is also currently looking at using AI. 
De Swaan said he does not fear competition from tech giants like Google-parent Alphabet or Apple entering the insurance market, although some technology companies have expressed interest in cooperating with Zurich.
"None of the technology companies so far have taken insurance risk on their balance sheet, because they don't want to be regulated," he said. 
"You need the balance sheet to be able to sell insurance and take insurance risk," he added.
May 18, 2017 1 comments

China's first national laboratory for brain-like artificial intelligence (AI) technology was inaugurated Saturday to pool the country's top research talent and boost the technology. China’s rapid rise up the ranks of AI research has the world's scientific community taking notice. In October, the Obama White House released a “strategic plan” for AI research, which noted that the U.S. no longer leads the world in journal articles on “deep learning,” a particularly hot subset of AI research right now. The country that had overtaken the U.S.? China, of course.
“I have a hard time thinking of an industry we cannot transform with AI,” says Andrew Ng, chief scientist at Baidu. Ng previously cofounded Coursera and Google Brain, the company’s deep learning project. Now he directs Baidu’s AI research out of Sunnyvale, California, right in Silicon Valley.
“China has a fairly deep awareness of what’s happening in the English-speaking world, but the opposite is not true,” says Ng. He points out that Baidu has rolled out neural network-based machine translation and achieved speech recognition accuracy that surpassed humans, but when Google and Microsoft, respectively, did the same, the American companies got much more publicity. “The velocity of work is much faster in China than in most of Silicon Valley,” says Ng.
Approved by the National Development and Reform Commission in January, the lab, based at the University of Science and Technology of China (USTC), aims to develop a brain-like computing paradigm and applications.
The university, known for its leading role in developing quantum communication technology, hosts the national lab in collaboration with a number of the country's top research bodies, such as Fudan University and the Shenyang Institute of Automation of the Chinese Academy of Sciences, as well as Baidu, operator of China's biggest online search engine.
Wan Lijun, president of USTC and chairman of the national lab, said that mimicking the human brain's ability to sort information will help build a complete AI technology development paradigm. The lab will carry out research to guide machine learning, such as recognizing messages and using visual neural networks to solve problems. It will also focus on developing new applications from its technological achievements.

May 15, 2017 3 comments

Microsoft is undertaking several projects dedicated to sustainability

Microsoft has been making significant contributions in Tech for Good and has taken major steps towards environmental conservation. The company's going-green mantra is underscored by the $1.1 million its employees raised in 2016 and the 5,949 volunteer hours they put in.
But it doesn't stop there. Microsoft's ecosystem allows the firm, its employees, and its business partners to leverage new technologies to improve the sustainability of their companies and communities. The Redmond giant recently tied up with The Nature Conservancy, a nonprofit, to extend support for nonprofits globally.

Greening the planet

Microsoft's commitment towards nature is deeply rooted in the technologies it utilizes. Microsoft announced a $1 billion commitment to bring cloud computing resources to nonprofit organizations around the world. As part of that commitment, the firm donates nearly $2 million every day in products and services to nonprofits.
Microsoft has extended its support to organizations like the World Wildlife Fund, Rocky Mountain Institute, Carbon Disclosure Project, Wildlife Conservation Society, and the U.N. Framework Convention on Climate Change's (UNFCCC) Climate Neutral Now initiative.

Here are some of the use cases

How is Prashant Gupta's initiative helping farmers in Andhra Pradesh increase revenue? Gupta, a Cloud + Enterprise Principal Director at Microsoft, is driving significant work for the environment. Earlier, he facilitated a partnership between Microsoft, the International Crops Research Institute for the Semi-Arid Tropics (ICRISAT) and the Andhra Pradesh government. The project involved helping groundnut farmers cope with drought.
Gupta and his team leveraged advanced analytics and machine learning to launch a pilot program with a Personalized Village Advisory Dashboard for 4,000 farmers in 106 villages in Andhra Pradesh, along with a Sowing App used by 175 farmers in one district.
Based on weather conditions, soil and other indicators, the Sowing App advises farmers on the best time to sow. The Personalized Village Advisory Dashboard provides insights about soil health, fertilizer recommendations, and seven-day weather forecasts.
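The post does not describe the app's internal logic, but a toy rule-based sketch gives a feel for how indicator-driven advice of this kind could work. Everything below (the thresholds, field names and rules) is hypothetical and is not the actual Microsoft/ICRISAT model.

    # Toy rule-based sowing advisory. Thresholds, field names and rules are
    # hypothetical illustrations, not the actual Microsoft/ICRISAT model.
    def sowing_advice(soil_moisture_pct, forecast_rain_mm_7d, soil_ph):
        """Return a simple advisory string from a few illustrative indicators."""
        if soil_ph < 5.5 or soil_ph > 8.5:
            return "Treat the soil before sowing (pH outside the tolerable range)."
        if forecast_rain_mm_7d >= 50:
            return "Heavy rain expected: delay sowing until after the wet spell."
        if soil_moisture_pct >= 30 and forecast_rain_mm_7d >= 20:
            return "Conditions favourable: sow within the next few days."
        return "Wait: soil moisture and forecast rainfall are currently too low."

    print(sowing_advice(soil_moisture_pct=35, forecast_rain_mm_7d=25, soil_ph=6.8))

A real advisory system would of course learn such rules from historical sowing and yield data rather than hard-coding them.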

Nature Conservancy’s Coastal Resilience program

Microsoft's Azure cloud platform for The Nature Conservancy's Coastal Resilience program: Coastal Resilience is a public-private partnership led by The Nature Conservancy to help coastal communities address the devastating effects of climate change and natural disasters. The program has trained and helped over 100 communities globally in the uses and applications of the Natural Solutions Toolkit, which runs on Microsoft's Azure platform.
The toolkit contains a suite of geospatial tools and web apps for climate adaptation and resilience planning across land and sea environments. This has helped in strategizing for risk reduction, restoration, and resilience to safeguard local habitats, communities, and economies.
Puget Sound: Puget Sound's lowland river valleys are a treasure house, delivering a wealth of natural, agricultural, industrial, recreational, and health benefits to the four million people who live in the region. However, these communities are at increasing risk of flooding from rising sea levels, more extreme coastal storms, and more frequent river flooding.

High winds hit Puget Sound

The Conservancy's Washington chapter is building a mapping tool as part of the Coastal Resilience toolkit to reduce the flow of polluted stormwater into Puget Sound. Emily Howe, an aquatic ecologist, is in charge of the project, which revolves around developing the new Stormwater Infrastructure mapping tool. This tool will eventually be integrated into the Puget Sound Coastal Resilience tool set, which will be hosted on Azure.
Furthermore, it will include a high-level heat map of stormwater pollution for the region, combining an overlay of pollution data with human and ecological data to prioritize areas of concern.
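As an illustration of the kind of weighted-overlay analysis such a heat map implies, here is a minimal Python sketch. The data layers, weights and values are hypothetical and are not taken from the Conservancy's tool.

    # Weighted overlay over hypothetical, already-normalised (0-1) data layers,
    # producing a priority score per grid cell. Illustrative only.
    import pandas as pd

    cells = pd.DataFrame({
        "cell_id": [1, 2, 3],
        "stormwater_pollution": [0.9, 0.4, 0.7],  # modelled pollutant load
        "population_density": [0.8, 0.2, 0.5],    # human exposure
        "habitat_sensitivity": [0.3, 0.9, 0.6],   # ecological value
    })

    weights = {"stormwater_pollution": 0.5,
               "population_density": 0.25,
               "habitat_sensitivity": 0.25}

    cells["priority"] = sum(cells[col] * w for col, w in weights.items())
    print(cells.sort_values("priority", ascending=False))

The cells with the highest combined score are the candidate "areas of concern" where stormwater interventions would pay off most.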
Data helps in watershed management: Today, around 1.7 billion people living in the world's largest cities depend on water flowing from watersheds. However, estimates suggest that up to two-thirds of the global population will be relying on those watershed sources by 2050.
Kari Vigerstol, The Nature Conservancy's Global Water Funds Director of Conservation, oversaw the development of a tool to provide cities with better data and help them protect their local water sources. The "Beyond the Source" study analyzed 4,000 cities and found that natural solutions can improve water quality for four out of five of them.
Furthermore, the Natural Solutions Toolkit is being leveraged globally to better understand and protect water resources around the world. Through the water security toolkit, cities will be furnished with a more powerful set of tools. Users can also explore data and access proven solutions and funding models using the beta version of the Protecting Water Atlas, a tool that will help improve water quality and supply for the future.

Microsoft is illuminating these places with its innovative array of big data and analytics offerings


Emily Howe
  1. In Finland, Microsoft partnered with CGI to develop a smarter transit system for the city of Helsinki. This data-driven initiative saw Microsoft utilize the city's existing warehouse systems to create a cloud-based solution that could collate and analyse travel data. Helsinki's bus team noticed a significant reduction in fuel costs and consumption, besides realizing increased travel safety and improved driver performance.
  2. Microsoft Research Lab Asia designed a mapping tool, called Urban Air, for markets in China. The tool allows users to see, and even predict, air quality levels across 72 cities in China. It furnishes real-time, detailed air quality information using big data and machine learning. Additionally, the tool is paired with a mobile app, which is used about three million times per day.
  3. Microsoft is implementing environmental strategies worldwide. The firm is assisting the city of Chicago in designing new ways to gather data, and is also helping the city use predictive analytics to better address water, infrastructure, energy, and transportation challenges.
  4. Boston is another instance where Microsoft is working to spread information about the variety of urban farming programs in the city, and is counting on the potential of AI and other technology to increase the impact for the city.
  5. Microsoft has also partnered with Athena Intelligence in San Francisco. As part of this partnership, Microsoft is leveraging Athena's data processing and visualization platform to gather valuable data about land, food, water, and energy, which will help improve local decision-making.

Outlook


Satya Nadella, CEO of Microsoft

Data is not all that matters. In the end, it is essentially about how cities can be empowered to take action based on that data. Microsoft has comprehensively supported the expansion of The Nature Conservancy's innovative Natural Solutions Toolkit. The solution suite is already powering on-the-ground and in-the-water projects around the world, benefiting coastal communities, residents of the Puget Sound, and others globally.
Microsoft is doing an excellent job, delivering on its promise to empower people and organizations globally to thrive in a resource-constrained world. The organization is empowering researchers, scientists and policy specialists at nonprofits by providing them with technology that addresses sustainability.
May 11, 2017 4 comments

TAPPING A NEURAL NETWORK TO TRANSLATE TEXT IN CHUNKS


Facebook's research arm is coming up with better ways to translate text using AI.
Facebook’s billion-plus users speak a plethora of languages, and right now, the social network supports translation of over 45 different tongues. That means that if you’re an English speaker confronted with German, or a French speaker seeing Spanish, you’ll see a link that says “See Translation.”
But Tuesday, Facebook announced that its machine learning experts have created a neural network that translates language up to nine times faster and more accurately than other current systems that use a standard method to translate text.
The scientists who developed the new system work at the social network’s FAIR group, which stands for Facebook A.I. Research.
“Neural networks are modeled after the human brain,” says Michael Auli, of FAIR, and a researcher behind the new system. One of the problems that a neural network can help solve is translating a sentence from one language to another, like French into English. This network could also be used to do tasks like summarize text, according to a blog item posted on Facebook about the research.
But there are multiple types of neural networks. The standard approach so far has been to use recurrent neural networks to translate text, which look at one word at a time and then predict what the output word in the new language should be. It learns the sentence as it reads it. But the Facebook researchers tapped a different technique, called a convolutional neural network, or CNN, which looks at words in groups instead of one at a time.
“It doesn’t go left to right,” Auli says, of their translator. “[It can] look at the data all at the same time.” For example, a convolutional neural network translator can look at the first five words of a sentence, while at the same time considering the second through sixth words, meaning the system works in parallel with itself.
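As a rough illustration of that idea, the sketch below runs a single 1D convolution over word embeddings so that every five-word window is processed at once. It is a minimal PyTorch sketch, not Facebook's actual system, which adds gated linear units, attention and many stacked layers; the sizes are arbitrary.

    # A single 1D convolution over word embeddings: every five-word window is
    # computed in parallel. Sizes are arbitrary; this is not Facebook's released model.
    import torch
    import torch.nn as nn

    vocab_size, embed_dim, hidden_dim, kernel = 10000, 256, 512, 5

    embed = nn.Embedding(vocab_size, embed_dim)
    conv = nn.Conv1d(embed_dim, hidden_dim, kernel_size=kernel, padding=kernel // 2)

    tokens = torch.randint(0, vocab_size, (1, 12))  # one 12-word "sentence"
    x = embed(tokens).transpose(1, 2)               # (batch, embed_dim, seq_len)
    features = torch.relu(conv(x))                  # all windows computed at once

    print(features.shape)                           # torch.Size([1, 512, 12])

Because the convolution touches all positions simultaneously rather than stepping word by word, the computation parallelizes far better than a recurrent network.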
Graham Neubig, an assistant professor at Carnegie Mellon University’s Language Technologies Institute, researches natural language processing and machine translation. He says that this isn’t the first time this kind of neural network has been used to translate text, but that this seems to be the best he’s ever seen it executed with a convolutional neural network.
“What this Facebook paper has basically showed— it’s revisiting convolutional neural networks, but this time they’ve actually made it really work very well,” he says.
Facebook isn't yet saying how it plans to integrate the new technology with its consumer-facing products; that's more the purview of a department there called the Applied Machine Learning group. But in the meantime, they've released the tech publicly as open source, so other coders can benefit from it.
That’s a point that pleases Neubig. “If it’s fast and accurate,” he says, “it’ll be a great additional contribution to the field.”
May 09, 2017 No comments
How to learn machine learning?

Learning machine learning is not a big deal, but to become an expert in any field you need a mentor. So try to follow some professional machine learning blogs like Shout Future, KDnuggets, Analytics Vidhya, etc., and if you have any doubts, clarify them through forums or ask directly in the comments.

Machine Learning 

I realised that the growth and development of machine learning would be incredible in the future, so I started learning machine learning back in 2013. I started from scratch and was confused a lot, because I didn't know what to do or where to start.
I think you are like me! Here is the step-by-step learning process I followed, so you can become a professional machine learning engineer yourself.

1. Getting Started:

  • Find out what machine learning is.
  • Learn the skills needed to become a machine learning engineer.
  • Attend conferences and workshops.
  • Interact with experienced people directly or through social media.

2. Learn Basics of Mathematics and statistics:

  • Start learning descriptive and inferential statistics with the Udacity course.
  • Take a linear algebra course from Khan Academy or MIT OpenCourseWare.
  • Learn multivariate calculus with Calculus One.
  • Learn probability with the edX course.

3. Choose your tool: Learn R or Python:

Learn R:
  • R is easier to learn compared to Python.
  • Interactive intro to R programming language by Data Camp. 
  • Exploratory data analysis by Coursera. 
  • Start to follow R-Bloggers. 
Learn Python:
  • Start your programming with Google's Python Class.
  • Intro to data analysis by Udacity.

4. Basic and Advanced machine learning tools:

  • Machine Learning Course by Coursera.
  • Machine Learning classification by Coursera.
  • Intro to Machine Learning by Udacity.
  • Blogs and guides like Shout Future, Machine Learning Mastery, etc.
  • Algorithms: Design and Analysis 1 
  • Algorithms: Design and Analysis 2

5. Build your Profile:

  • Start your Github profile. 
  • Start to practice in Kaggle competitions.
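Once the basics are in place, a first end-to-end model is surprisingly short. Here is a minimal sketch with scikit-learn (assuming it is installed) showing the load, split, train and evaluate loop you will repeat in Kaggle competitions.

    # A first end-to-end model: load a toy dataset, split, train, evaluate.
    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=42)

    model = RandomForestClassifier(n_estimators=100, random_state=42)
    model.fit(X_train, y_train)

    print("Test accuracy:", accuracy_score(y_test, model.predict(X_test)))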
That's it. With these skills you can enter what is now called the sexiest job in the world: "Data Scientist". Plan well and follow these steps carefully.
You have to travel a long way to become an expert in this field, so start your journey today and separate yourself from the crowd.
Please comment with your ideas and opinions.


May 09, 2017 13 comments

While there is no doubt that the insurance segment is witnessing an unprecedented annual growth, insurers continue to struggle with loss-leading portfolios and lower insurance penetration among consumers. Insurers are facing increasing pressure to strike the right balance, while ensuring adherence to underwriting and claims decisions in the face of regulatory pressures, growth of digital channels and increasing competition. Adding to this is the need to secure the good risks, while weeding out the bad risks. 
Insurers are turning their attention towards big data and analytics solutions to help check fraud, recognize misrepresentation and prevent identity theft. With the government’s recent push to adopt digitization, the Aadhaar card plays a crucial role, linking income tax permanent account numbers (PANs), banks, credit bureaus, telecoms and utilities and providing a unified and centralized data registry that profiles an individual’s economic behaviour. The e-commerce boom provides additional data on financial behaviour. 

 Fraudulent practices 

Claims fraud is a threat to the viability of the health insurance business. Although health insurers regularly crack down on unscrupulous healthcare providers, fraudsters continually exploit any new loopholes with forged documents purporting to be from leading hospitals. 
Medical ID theft is one of the most common techniques adopted by fraudsters: claim funds are paid into their bank accounts through identity theft. The insurer's procedures allow the policyholder to send a scanned image of his or her cheque, with the bank account details, for ID purposes, and this is then manipulated by the fraudsters.
Besides forged documents, other common sources of fraud come from healthcare providers themselves, with cases of 'upgrading' (billing for more expensive treatments than those provided), 'phantom billing' and 'ganging' (billing for services for family members or other individuals accompanying the patient that were never delivered).
Health insurers have to take action before an insurance claim is paid and put an end to the 'pay-and-chase' approach. Using data to validate a claim before payment is far more useful than having to 'chase' a payment afterwards. This approach, however, rests on real-time access to information sources.

 Life insurance’s woes 

India’s life insurers suffer from low persistency rates that see more than one in three policies lapse by the end of the second year. This may be attributed to mis-selling, misrepresentation of material facts, premeditated fabrication and in other cases suppression of facts. 
Life insurers have been facing fraud that is largely data driven and can be curbed with effective use of data analytics. While seeking customer information, insurers should perform checks against public record databases to ensure they have insights into the validity of personal information. This can be achieved through data mining and validation from various sources. For instance, in the US, frauds are committed through stolen social security numbers or driver’s license numbers, or those of deceased individuals. Data accessed from various sources will help identify if the person in question is using multiple identities or multiple people are using the identity presented. 
 The use of public, private and proprietary databases to obtain information not typically found in an individual’s wallet to create knowledge-based authentication questions which are designed to be answered only by the correct individual can also help reduce fraud significantly. 
 Continuous evaluation of existing customers is also critical for early fraud detection. For example, one red flag for potential fraud can involve beneficiary or address changes for new customers. Insurers should verify address changes, as many consumers do not know their identity has been stolen until after it has happened. By applying relationship analytics, insurers can obtain insights into the relationship between the insured, the owner, and the beneficiary, to help determine whether those individuals are linked to other suspicious entities or are displaying suspicious behaviour patterns. 
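As a concrete illustration of such a red-flag rule, the sketch below flags policies where a new customer changes the beneficiary or address soon after purchase. The field names, sample records and the 90-day window are hypothetical; real systems would combine many such rules with relationship analytics and predictive models.

    # Flag policies where a new customer changes beneficiary or address soon after
    # purchase. Field names, records and the 90-day window are hypothetical.
    from datetime import date

    policies = [
        {"policy_id": "P001", "issued": date(2017, 1, 10),
         "beneficiary_changed": date(2017, 2, 1), "address_changed": None},
        {"policy_id": "P002", "issued": date(2016, 3, 5),
         "beneficiary_changed": None, "address_changed": None},
    ]

    def red_flags(policy, window_days=90):
        """Return the change events that occurred within the review window."""
        flags = []
        for field in ("beneficiary_changed", "address_changed"):
            changed = policy[field]
            if changed is not None and (changed - policy["issued"]).days <= window_days:
                flags.append(field)
        return flags

    for policy in policies:
        flagged = red_flags(policy)
        if flagged:
            print(policy["policy_id"], "needs manual review:", flagged)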

 Solutions for all 

Like in most developed insurance markets, it is imperative that data on policies, claims and customers be made available on a shared platform, in real-time. Such a platform can allow for real-time enquiries on customers. It can also facilitate screening of the originator of every proposal. Insurers would contribute policy, claims and distributors’ information to the repository on a regular basis. Such data repositories can provide insights to help insurers detect patterns, identify nexus and track mis-selling. 
 Insurance data is dynamic and hence data analytics cannot depend only on past behaviour patterns. So data has to be updated regularly. Predictive analysis can play a significant role in identifying distributor nexus, mis-selling and repeated misrepresentations. Relationship analytics could be used to identify linked sellers and suspected churn among them. 
 These data platform-based solutions are not just about preventing reputational risk and loss of business, but with controlled and more informed risk selection, there could be a positive impact on pricing of products. The whole process of underwriting new business with greater granularity of risk and greater transparency can bring in new customers, but it could also out-price some others. There can be increased scrutiny of agents, brokers and distributors to eliminate any suspects from the system. 
 Successful fraud prevention strategies include shifting towards a proactive approach that detects fraud prior to policy issuance, and leveraging red flags or business rules, real-time identity checks, relationship analytics, and predictive models. Insurers who leverage both internal data and external data analytics will better understand fraud risks throughout their customer life cycles, and will be more prepared to detect and mitigate those risks.
May 09, 2017 2 comments
Table of Contents:

Introduction
Nature of data
  1. Time series data
  2. Spatial data
  3. Spatio-temporal data
Categories of data
  1. Primary data
    1. Direct personal interviews
    2. Indirect oral interviews
    3. Information from correspondents
    4. Mailed questionnaire method
    5. Schedules sent through enumerators
  2. Secondary data
    1. Published sources
    2. Unpublished sources


Data gathering techniques 

Introduction:
Everybody collects, interprets and uses information, much of it in numerical or statistical form, in day-to-day life. People receive large quantities of information every day through conversations, television, computers, the radio, newspapers, posters, notices and instructions. It is precisely because there is so much information available that people need to be able to absorb, select and reject it.

In everyday life, in business and in industry, certain statistical information is necessary, and it is important to know where to find it and how to collect it. As consumers, everybody has to compare prices and quality before making any decision about what goods to buy. As employees of a firm, people want to compare their salaries and working conditions, promotion opportunities and so on. The firms, on their part, want to control costs and expand their profits.

One of the main functions of statistics is to provide information which will help in making decisions. Statistics provides this type of information by giving a description of the present, a profile of the past and an estimate of the future.

The following are some of the objectives of collecting statistical information.
1. To describe the methods of collecting primary statistical information.
2. To consider the stages involved in carrying out a survey.
3. To analyse the process involved in observation and interpreting.
4. To define and describe sampling.
5. To analyse the basis of sampling.
6. To describe a variety of sampling methods.

Statistical investigation is a comprehensive process that requires systematic collection of data about some group of people or objects, describing and organizing the data, analyzing the data with the help of different statistical methods, summarizing the analysis and using the results for making judgements, decisions and predictions.
The validity and accuracy of the final judgement are most crucial and depend heavily on how well the data was collected in the first place. The quality of the data will greatly affect the conclusions, and hence the utmost importance must be given to this process; every possible precaution should be taken to ensure accuracy while collecting the data.

Nature of data:
It may be noted that different types of data can be collected for different purposes. The data can be collected in connection with time, with geographical location, or with both time and location.
The following are the three types of data:
1. Time series data
2. Spatial data
3. Spatio-temporal data

Time series data:
It is a collection of a set of numerical values collected over a period of time. The data might have been collected either at regular or at irregular intervals of time.
Spatial data:
If the data collected is connected with a place, then it is termed spatial data. For example, the data may be
1. The number of runs scored by a batsman in different test matches of a test series at different places.
2. District-wise rainfall in a state.
3. Prices of silver in four metropolitan cities.
Spatio-temporal data:
If the data collected is connected with both time and place, then it is known as spatio-temporal data.
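For illustration, the small Python sketch below builds a toy example of each of the three types using pandas; the figures are made up.

    # Toy examples of the three types of data, built with pandas. Figures are made up.
    import pandas as pd

    # 1. Time series data: one place, values observed over time.
    silver_price = pd.Series(
        [102.5, 104.1, 103.8],
        index=pd.to_datetime(["2017-01-01", "2017-02-01", "2017-03-01"]),
        name="silver_price")

    # 2. Spatial data: one point in time, values across places.
    rainfall_by_district = pd.DataFrame(
        {"district": ["A", "B", "C"], "rainfall_mm": [120, 95, 140]})

    # 3. Spatio-temporal data: values indexed by both place and time.
    rainfall_by_district_month = pd.DataFrame({
        "district": ["A", "A", "B", "B"],
        "month": ["Jan", "Feb", "Jan", "Feb"],
        "rainfall_mm": [120, 80, 95, 110],
    })

    print(silver_price, rainfall_by_district, rainfall_by_district_month, sep="\n\n")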

Categories of data:
Any statistical data can be classified under two categories depending upon the sources utilized. These categories are,
1. Primary data
2. Secondary data

Primary data:
Primary data is data collected by the investigator himself for the purpose of a specific inquiry or study. Such data is original in character and is generated by surveys conducted by individuals, research institutions or other organisations.
For example, if a researcher is interested in knowing the impact of a noon-meal scheme for school children, he has to undertake a survey and collect data on the opinions of parents and children by asking relevant questions. Such data, collected for this purpose, is called primary data.

The primary data can be collected by the following five methods.
1. Direct personal interviews.
2. Indirect Oral interviews.
3. Information from correspondents.
4. Mailed questionnaire method.
5. Schedules sent through enumerators.

1. Direct personal interviews:
The persons from whom information is collected are known as informants. The investigator personally meets them and asks questions to gather the necessary information. It is a suitable method for intensive rather than extensive field surveys and suits best for an intensive study of a limited field.

Merits:
1. People willingly supply information because they are approached personally. Hence, more responses are noticed in this method than in any other.
2. The collected information is likely to be uniform and accurate, as the investigator is there to clear the doubts of the informants.
3. Supplementary information on the informant's personal aspects can be noted. Information on character and environment may help later to interpret some of the results.
4. Answers to questions about which the informant is likely to be sensitive can be gathered by this method.
5. The wording of one or more questions can be altered to suit any informant, and explanations may be given in other languages. Inconvenience and misinterpretation are thereby avoided.

Limitations:
1. It is very costly and time consuming.
2. It is very difficult when the number of persons to be interviewed is large and the persons are spread over a wide area.
3. Personal prejudice and bias are greater under this method.

2. Indirect Oral Interviews:
Under this method the investigator contacts witnesses, neighbours, friends or other third parties who are capable of supplying the necessary information. This method is preferred if the required information concerns addiction, or the cause of a fire, theft or murder, etc. If a fire has broken out at a certain place, the persons living in the neighbourhood and witnesses are likely to give information on the cause of the fire.
In some cases, police interrogate third parties who are supposed to have knowledge of a theft or a murder and get some clues. Enquiry committees appointed by governments generally adopt this method to get people's views and all possible details of facts relating to the enquiry. This method is suitable whenever direct sources do not exist, cannot be relied upon or would be unwilling to part with the information.
The validity of the results depends on a few factors, such as the nature of the person whose evidence is being recorded, the ability of the interviewer to draw out information from the third parties by means of appropriate questions and cross-examination, and the number of persons interviewed. For the success of this method, one person or one group alone should not be relied upon.

3. Information from correspondents:
The investigator appoints local agents or correspondents in different places and compiles the information sent by them. Information reaches newspapers and some government departments by this method. The advantage of this method is that it is cheap and appropriate for extensive investigations, but it may not ensure accurate results because the correspondents are likely to be negligent, prejudiced and biased. This method is adopted in those cases where information is to be collected periodically from a wide area over a long time.

4. Mailed questionnaire method:
Under this method a list of questions is prepared and is sent to all the informants by post. The list of questions is technically called a questionnaire. A covering letter accompanying the questionnaire explains the purpose of the investigation and the importance of correct information, and requests the informants to fill in the blank spaces provided and to return the form within a specified time. This method is appropriate in those cases where the informants are literate and are spread over a wide area.

Merits:
1. It is relatively cheap.
2. It is preferable when the informants are spread over a wide area.

Limitations:
1. The greatest limitation is that the informants should be literate, able to understand and reply to the questions.
2. It is possible that some of the persons who receive the questionnaires do not return them.
3. It is difficult to verify the correctness of the information furnished by the respondents.
With a view to minimizing non-response and collecting correct information, the questionnaire should be carefully drafted. There is no hard and fast rule, but the following general principles may be helpful in framing the questionnaire. A covering letter and a self-addressed, stamped envelope should accompany the questionnaire.
The covering letter should politely point out the purpose of the survey and the privilege of the respondent, who is one among the few associated with the investigation. It should assure respondents that the information will be kept confidential and will never be misused. It may promise a copy of the findings, free gifts, concessions, etc.

Characteristics of a good questionnaire:
1. Number of questions should be minimum.
2. Questions should be in logical orders, moving from easy to more difficult questions.
3. Questions should be short and simple. Technical terms and vague expressions capable of different interpretations should be avoided.
4. Questions fetching YES or NO answers are preferable. There may be some multiple-choice questions, but questions requiring lengthy answers are to be avoided.
5. Personal questions and questions which require memory power and calculations should also be avoided.
6. Questions should enable cross-checking, so that deliberate or unconscious mistakes can be detected to an extent.
7. Questions should be carefully framed so as to cover the entire scope of the survey.
8. The wording of the questions should be proper without hurting the feelings or arousing resentment.
9. As far as possible, confidential information should not be sought.
10. The physical appearance should be attractive, and sufficient space should be provided for answering each question.

5. Schedules sent through Enumerators:
Under this method enumerators or interviewers take the schedules, meet the informants and fill in their replies. Often a distinction is made between a schedule and a questionnaire: a schedule is filled in by the interviewer in a face-to-face situation with the informant, while a questionnaire is filled in by the informant, who receives and returns it by post. This method is suitable for extensive surveys.

Merits:
1. It can be adopted even if the informants are illiterate.
2. Answers to questions of a personal and pecuniary nature can be collected.
3. Non-response is minimal, as enumerators go personally and contact the informants.
4. The information collected is reliable, and the enumerators can be properly trained to ensure this.
5. It is the most popular method.

Limitations:
1. It is the costliest method.
2. Extensive training has to be given to the enumerators for collecting correct and uniform information.
3. Interviewing requires experience; unskilled investigators are likely to fail in their work.

Before the actual survey, a pilot survey is conducted, in which the questionnaire or schedule is pre-tested. A few among the people from whom the actual information is needed are asked to reply. If they misunderstand a question, find it difficult to answer or do not like its wording, it is to be altered. Further, it is to be ensured that every question fetches the desired answer.

Merits and Demerits of primary data:
1. The collection of data by the method of personal survey is possible only if the area covered by the investigator is small. Collection of data by sending enumerators is bound to be expensive. Care should be taken that the enumerators record the correct information provided by the informants.
2. Collection of primary data by framing schedules or by distributing and collecting questionnaires by post is less expensive and can be completed in a shorter time.
3. If the questions are embarrassing, of a complicated nature or probe into the personal affairs of individuals, then the schedules may not be filled with accurate and correct information, and hence this method is unsuitable.
4. The information collected as primary data is more reliable than that collected from secondary data.

Secondary Data:
Secondary data are data which have already been collected and analysed by some earlier agency for its own use, and which are later used by a different agency.

According to W. A. Neiswanger, ‘A primary source is a publication in which the data are published by the same authority which gathered and analysed them. A secondary source is a publication, reporting the data which have been gathered by other authorities and for which others are responsible’.

Sources of Secondary data:
In most studies the investigator finds it impracticable to collect first-hand information on all related issues, and as such he makes use of the data collected by others. There is a vast amount of published information from which statistical studies may be made, and fresh statistics are constantly being produced. The sources of secondary data can broadly be classified under two heads:

1. Published sources, and
2. Unpublished sources.

1. Published Sources:
The various sources of published data are:
1. Reports and official publications of
(i) International bodies such as the International Monetary Fund, International Finance Corporation and United Nations Organisation.
(ii) Central and State Governments such as the Report of the Tandon Committee and Pay Commission.
2. Semi-official publication of various local bodies such as Municipal Corporations and District Boards.
3. Private publications, such as the publications of:
(i) Trade and professional bodies such as the Federation of Indian Chambers of Commerce and the Institute of Chartered Accountants.
(ii) Financial and economic journals such as ‘Commerce’, ‘Capital’ and ‘Indian Finance’.
(iii) Annual reports of joint stock companies.
(iv) Publications brought out by research agencies, research scholars, etc.

It should be noted that the publications mentioned above vary with regard to the periodicity of publication. Some are published at regular intervals (yearly, monthly, weekly, etc.), whereas others are ad hoc publications, i.e., with no regularity about the periodicity of publication.

Note: A lot of secondary data is available on the internet. We can access it at any time for further studies.

2. Unpublished Sources
All statistical material is not always published. There are various sources of unpublished data, such as records maintained by various government and private offices and studies made by research institutions, scholars, etc. Such sources can also be used where necessary.

Precautions in the use of Secondary data
The following are some of the points to be considered in the use of secondary data:
1. How the data has been collected and processed.
2. The accuracy of the data.
3. How far the data has been summarized.
4. How comparable the data is with other tabulations.
5. How to interpret the data, especially when figures collected for one purpose are used for another.
Generally speaking, with secondary data, people have to compromise between what they want and what they are able to find.

Merits and Demerits of Secondary Data:
1. Secondary data is cheap to obtain. Many government publications are relatively cheap, and libraries stock quantities of secondary data produced by the government, by companies and by other organisations.
2. Large quantities of secondary data can be obtained through the internet.
3. Much of the available secondary data has been collected for many years and therefore can be used to plot trends.
4. Secondary data is of value to:
- The government, to help in making decisions and planning future policy.
- Business and industry, in areas such as marketing and sales, in order to appreciate the general economic and social conditions and to provide information on competitors.
- Research organisations, by providing social, economic and industrial information.

May 06, 2017 No comments