Automatic answer evaluation machine

Evaluating subjective or text-based answers has been a hurdle in the development of eLearning systems for a very long time. The problem with evaluating subjective answers is that each student has his/her own way of answering, and it is difficult to determine the degree of correctness.

Evaluation involves assessing grammar and knowledge of concepts using the interpretation and creativity of a human mind. The person operating the machine should have knowledge of the subject on which the paper is to be evaluated.


Our proposed system uses machine learning and natural language processing (NLP) to solve this problem.

Typically, there are two types of question answering systems:

Closed-domain question answering

  1. This deals with questions under a specific domain.
  2. On the other hand, it can be harder, since the information is not generally available in the public domain.
Open-domain question answering

  1. This deals with questions about nearly everything.
  2. These systems rely only on general ontologies and world knowledge.
There are two methods for retrieving answers to user questions:

  • AI method
  1. This method requires complex and advanced linguistic analysis programs.
  2. This method focuses on answer generation by analyzing the question and building an understanding of it.
  • FAQ search method
  1. Artificial intelligence approach: This method uses an ontology-based knowledge base technique.
  2. Statistical techniques: this method considers the similarities in words, sentence length and word order (a minimal sketch follows this list).
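
To make the statistical technique concrete, here is a minimal sketch that scores a user question against stored FAQ entries using word overlap, sentence-length ratio and word-order agreement. The three signals and their equal weighting are illustrative assumptions on our part, not a prescription from the article.

# A minimal sketch of the statistical FAQ-search technique described above.
# The three similarity signals and their equal weighting are our own
# illustrative choices.

def tokenize(sentence):
    """Lowercase a sentence and split it into word tokens."""
    return sentence.lower().split()

def word_overlap(a, b):
    """Jaccard similarity between the word sets of two token lists."""
    union = set(a) | set(b)
    return len(set(a) & set(b)) / len(union) if union else 0.0

def length_similarity(a, b):
    """Ratio of the shorter token count to the longer one."""
    return min(len(a), len(b)) / max(len(a), len(b)) if a and b else 0.0

def order_similarity(a, b):
    """Fraction of shared words that keep their relative order."""
    shared = [w for w in a if w in b]
    if len(shared) < 2:
        return 1.0 if shared else 0.0
    positions = [b.index(w) for w in shared]
    in_order = sum(1 for x, y in zip(positions, positions[1:]) if x < y)
    return in_order / (len(positions) - 1)

def faq_score(question, faq_entry):
    """Combine the three signals with equal weights (an assumption)."""
    q, f = tokenize(question), tokenize(faq_entry)
    return (word_overlap(q, f) + length_similarity(q, f)
            + order_similarity(q, f)) / 3

faqs = ["What is machine learning?", "How do I install Python?"]
question = "How can I install Python?"
print(max(faqs, key=lambda f: faq_score(question, f)))
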
The methodologies used in this project are machine learning and NLP.

Procedure

  1. The software will take a copy of the answer as input and, after a pre-processing step, extract the text of the answer.
  2. This text will go through further processing to build a model of keywords and feature sets. Model answer sets and keywords categorized as mentioned will also be given as input. Based on its training, the classifier will then assign marks to the answer (see the sketch after this list).
  3. Marks for the answer will be the final output. After extraction, the answers to all the questions will be stored in a database. This brings much more transparency to the present method of answer checking.
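
As one plausible realization of the scoring in steps 1 and 2, the sketch below pre-processes the texts, builds TF-IDF features, and maps the cosine similarity with the model answer onto marks. The choice of scikit-learn and the linear mark mapping are our assumptions; the article's system instead trains a classifier on model answer sets.

# A minimal sketch of the scoring step, assuming scikit-learn is installed.
# TF-IDF plus cosine similarity stands in for the trained classifier the
# article describes; the linear mark mapping is an illustrative assumption.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def preprocess(text):
    """Basic pre-processing: lowercase and collapse extra whitespace."""
    return " ".join(text.lower().split())

def score_answer(student_answer, model_answer, max_marks=5):
    """Assign marks by comparing a student answer to the model answer."""
    vectorizer = TfidfVectorizer(stop_words="english")
    vectors = vectorizer.fit_transform(
        [preprocess(model_answer), preprocess(student_answer)])
    similarity = cosine_similarity(vectors[0], vectors[1])[0][0]
    return round(similarity * max_marks, 1)

model = "Photosynthesis converts light energy into chemical energy in plants."
student = "Plants use photosynthesis to turn light into chemical energy."
print(score_answer(student, model))
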
The only disadvantages of using a computer as an evaluation tool are the possibility of failure at the time answers are submitted and the lack of technical training provided to teachers and students.

There are three ways of evaluating the performance of question answering systems:

  • Manual evaluations
  • Semi-automatic evaluations
  • Fully automatic evaluations
Objectives

  1. To create a data repository of questions and answers as a knowledge base (see the sketch after this list).
  2. To design a method to read answers and extract keywords or features from them.
  3. To develop a method/grammar to construct a meaningful statement from the generated features.
  4. To develop an approach to test the intermediate answers generated against the knowledge base.
  5. To allocate marks depending on the comparison or testing.
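
As an illustration of objective 1, the sketch below creates a small repository of questions, model answers, and student answers. It uses Python's built-in sqlite3 module rather than the MySQL server listed in the requirements, purely to keep the example self-contained; the schema is a hypothetical choice on our part.

# A minimal sketch of the knowledge base from objective 1. sqlite3 replaces
# the MySQL server named under the software requirements so the example is
# self-contained; every table and column name here is a hypothetical choice.
import sqlite3

conn = sqlite3.connect("answer_eval.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS questions (
        id INTEGER PRIMARY KEY,
        question TEXT NOT NULL,
        model_answer TEXT NOT NULL,
        keywords TEXT NOT NULL,      -- comma-separated expected keywords
        max_marks INTEGER NOT NULL
    )""")
conn.execute("""
    CREATE TABLE IF NOT EXISTS student_answers (
        id INTEGER PRIMARY KEY,
        question_id INTEGER REFERENCES questions(id),
        answer TEXT NOT NULL,
        marks REAL                   -- filled in after evaluation
    )""")
conn.execute(
    "INSERT INTO questions (question, model_answer, keywords, max_marks) "
    "VALUES (?, ?, ?, ?)",
    ("Define photosynthesis.",
     "Photosynthesis converts light energy into chemical energy in plants.",
     "photosynthesis,light,chemical,energy", 5))
conn.commit()
conn.close()
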
Software Requirements:

  • PyCharm IDE Community Edition 2018.1.4 x64
  • MySQL v5.7.7 or higher
  • XAMPP v3.2.2
  • Python 2.4 or higher
Hardware Requirements:

  • Operating system: Microsoft Windows 10/8/7/Vista/2003/XP
  • RAM: 1 gigabyte (GB) for 32-bit or 2 GB for 64-bit
  • Processor: 1 gigahertz (GHz) or faster
  • Hard Disk: 5 GB of hard disk space, plus at least 1 GB for caches
  • Display: 1024×768 minimum resolution
Conclusion

Considering all these approaches and the downfalls of each, this project extracts different parameters and proposes an approach to evaluate short answers for descriptive subjects automatically.
