Vision Based Sign Language Identification System Using Facet Analysis


Bachelor Thesis , 2013 , 68 Pages , Grade: A+

Author: Faryal Amber

Computer Science - Software


The communication gap between the deaf and the hearing population is clearly noticeable. To enable communication between the two groups and to bridge the gap in access to next-generation human-computer interfaces, automated sign language analysis is crucial. A practical solution is to build a conversion system that translates sign language gestures into text or speech. This thesis explores and experiments with an efficient methodology based on facet feature analysis. A methodology is proposed that extracts candidate hand gestures from a sequence of video frames and collects hand features, yielding a recognition system that can recognize gestures from video and serve as a translator. The system has three parts: hand detection, shape matching, and Hu moments comparison. The hand detection stage detects the hand through skin detection and contour finding, and also covers the preprocessing of video frames. Shape matching is performed by comparing histograms. The Hu moment values of the candidate hand region are computed using contour region analysis and compared against stored values to identify the particular sign language alphabet. Experimental analysis supports the efficiency of the proposed methodology on benchmark data.

Excerpt


Table of Contents

1. INTRODUCTION

1.1. Area Preface

1.2. Problem Statement

1.3. Project Objectives

1.4. Scope

1.5. Significance of the Study

1.6. Limitations

2. LITERATURE SURVEY

3. METHODOLOGY

3.1. Data Acquisition

3.2. Input Video

3.3. Hand Detection

3.3.1. Skin Detection

3.3.2. Video Processing

3.3.3. Contour Extraction

3.4. Gesture Recognition Technique

3.4.1. Feature Collection

3.4.2. Shape Matching

3.4.3. Hu Invariant Moments Comparison

3.4.4. Recognition Results

4. SYSTEM DESIGN

4.1. Proposed System Modeling Language

4.1.1. Use Case Diagram

4.1.2. Flow Charts

4.1.3. Sequence Diagram

5. IMPLEMENTATION

5.1. System Requirements

5.1.1. Software Requirements

5.1.2. Hardware Requirements

5.2. System Description

5.2.1. Load Video

5.2.2. Hand Detection

5.2.2.1. Skin Detection Steps

5.2.2.3. Contours Processing Steps

5.2.3. Gesture Recognition

6. TESTING

6.1. Testing

6.1.1. Test Case 1

6.1.2. Test Case 2

6.1.3. Test Case 3

6.1.4. Test Case 4

6.1.5. Test Case 5

6.2. Results

7. CONCLUSION & FUTURE WORK

Research Objectives and Focus

The primary goal of this research is to develop an automated sign language recognition system that bridges the communication gap between the hearing-impaired and the hearing population through advanced image processing and computer vision techniques. The work specifically addresses the challenge of recognizing hand gestures in video sequences to translate sign language alphabets into text.

  • Automated interpretation of sign language using C#.NET and OpenCV.
  • Implementation of robust hand detection via skin color segmentation and contour analysis.
  • Utilization of 2D pair-wise geometrical histograms for effective shape matching.
  • Application of Hu Invariant Moments for accurate gesture feature extraction.
  • Integration of a classification module to identify and display recognized sign language alphabets.

Excerpt from the Book

3.3.1. Skin Detection

Hand detection is done using a skin detection algorithm. Skin color segmentation is the main step, so a robust and accurate skin color detection algorithm is essential, because the succeeding steps depend largely on the quality of the segmented image. It is vital to select a color space suitable for the application at hand. The YCbCr color space (Y is the luminance component; Cb and Cr are the blue-difference and red-difference chrominance components), an encoded nonlinear RGB signal, is used for skin modeling; it was chosen for skin segmentation because of its computational benefits. The procedure of skin detection is as follows:

The acquired video frames are converted to the YCbCr color space with minimum and maximum threshold values. The data values of each frame are stored in a matrix. A loop iterates over the rows and columns of the matrix, a conditional statement evaluates each value, and the computed values are assigned to a new frame. Dilation and erosion are then applied to this frame using a specified structuring element. The resultant frame, with unwanted and noisy areas removed, is returned for further processing.
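The steps above can be sketched in Python with NumPy. This is an illustrative reconstruction, not the thesis code (which is written in C#.NET with OpenCV/EmguCV): the Cb/Cr thresholds below are typical values from the skin-detection literature and are assumptions, not values taken from the thesis.

```python
import numpy as np

# Hypothetical thresholds: commonly cited Cb/Cr skin ranges,
# not the values used in the thesis itself.
CB_MIN, CB_MAX = 77, 127
CR_MIN, CR_MAX = 133, 173

def skin_mask(rgb):
    """Binary skin mask for an RGB frame of shape (H, W, 3)."""
    r, g, b = [rgb[..., i].astype(np.float64) for i in range(3)]
    # BT.601 RGB -> YCbCr chrominance components
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    mask = (CB_MIN <= cb) & (cb <= CB_MAX) & (CR_MIN <= cr) & (cr <= CR_MAX)
    return mask.astype(np.uint8)

def dilate(mask, k=3):
    """Binary dilation with a k x k square structuring element."""
    pad = k // 2
    p = np.pad(mask, pad)
    out = np.zeros_like(mask)
    for dy in range(k):
        for dx in range(k):
            out |= p[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return out

def erode(mask, k=3):
    """Binary erosion: complement of dilating the complement."""
    return 1 - dilate(1 - mask, k)
```

Applying `erode` after `dilate` (morphological closing) fills small holes in the mask; the reverse order (opening) removes small noisy blobs, which matches the "removing of unwanted and noisy areas" step described above.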

Summary of Chapters

1. INTRODUCTION: Discusses the motivation behind sign language recognition, the communication barrier faced by the deaf community, and outlines the project's scope and limitations.

2. LITERATURE SURVEY: Reviews existing methodologies for gesture recognition, including Neural Networks, Hidden Markov Models, and various image preprocessing techniques.

3. METHODOLOGY: Details the algorithmic approach used, covering data acquisition, video preprocessing, hand detection via skin segmentation, and the gesture recognition process.

4. SYSTEM DESIGN: Presents the architectural design using UML diagrams, including Use Cases, flow charts for each module, and a sequence diagram of the entire system.

5. IMPLEMENTATION: Describes the development environment, software/hardware requirements, and the specific code implementation steps for the GUI and detection algorithms.

6. TESTING: Evaluates the system through various test cases and provides an analysis of the recognition and hand detection accuracy.

7. CONCLUSION & FUTURE WORK: Summarizes the achievements of the thesis and suggests future improvements, such as incorporating Artificial Neural Networks to enhance precision.

Keywords

Contours, Skin Detection, Shape Matching, Gesture Recognition, Hu Moments Comparison, Sign Language Identification System, Image Processing, Computer Vision, C#.NET, EmguCV, OpenCV, Feature Extraction, Hand Tracking, Video Preprocessing, Human Computer Interfaces.

Frequently Asked Questions

What is the core purpose of this research project?

The project aims to build an automated system capable of recognizing sign language gestures from video files and translating them into corresponding text to facilitate communication for the hearing-impaired.

Which primary technological domains are utilized?

The study heavily relies on computer vision and digital image processing techniques, implemented within a C#.NET framework using specialized libraries like OpenCV and EmguCV.

What is the primary research question?

The research seeks to determine how image processing and feature set analysis can be used to accurately recognize and categorize sign language alphabets from video input.

Which methodologies are employed for gesture recognition?

The system uses a combination of skin detection for hand localization, contour extraction to define hand shapes, 2D pair-wise geometrical histograms for shape matching, and Hu Invariant Moments for invariant feature comparison.
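To make the histogram-matching idea concrete, here is a minimal sketch in Python with NumPy. It uses a 1-D histogram of pairwise point distances as a simplified stand-in for the 2D pairwise geometrical histograms the thesis describes, and a chi-square distance for comparison; both simplifications are assumptions for illustration only.

```python
import numpy as np

def distance_histogram(points, bins=16):
    """Normalized histogram of pairwise point distances: a simplified
    1-D stand-in for the 2D pairwise geometrical histograms used in
    the thesis. Distances are normalized by the maximum, so the
    descriptor is scale-invariant."""
    pts = np.asarray(points, dtype=float)
    diffs = pts[:, None, :] - pts[None, :, :]
    d = np.sqrt((diffs ** 2).sum(-1))
    d = d[np.triu_indices(len(pts), k=1)]  # each pair once
    h, _ = np.histogram(d / d.max(), bins=bins, range=(0, 1))
    return h / h.sum()

def chi_square(h1, h2, eps=1e-12):
    """Chi-square distance between two normalized histograms
    (smaller means more similar shapes)."""
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))
```

In a matcher, the candidate hand contour's histogram would be compared against the stored template histograms and the smallest distance taken as the match.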

What does the main part of the document cover?

The main body focuses on the system methodology, architectural design, implementation details, and a rigorous testing phase where the system is evaluated against benchmark data.

Which keywords best characterize this work?

Key terms include Sign Language Identification, Skin Detection, Contours, Shape Matching, and Gesture Recognition.

How is the "Skin Detection" implemented specifically?

The system converts RGB video frames into the YCbCr color space to exploit its computational efficiency for segmenting skin tones, followed by morphological filters like erosion and dilation to clean the resulting binary masks.

What role do Hu Invariant Moments play in the system?

Hu Invariant Moments provide a set of features that remain stable regardless of the rotation, scale, or reflection of the hand gesture, allowing for more reliable matching between the input video and stored templates.
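The invariance property can be demonstrated with a short NumPy sketch computing the first two Hu moments from the textbook definitions (raw moments, then central moments, then scale-normalized moments). This is an illustration of the general technique, not the thesis implementation, which relies on OpenCV/EmguCV.

```python
import numpy as np

def hu_first_two(mask):
    """First two Hu invariant moments of a binary mask, built from
    central moments normalized by m00^(1 + (p+q)/2)."""
    ys, xs = np.nonzero(mask)
    m00 = len(xs)                      # zeroth moment: area
    cx, cy = xs.mean(), ys.mean()      # centroid (translation invariance)
    x, y = xs - cx, ys - cy
    mu20, mu02, mu11 = (x**2).sum(), (y**2).sum(), (x*y).sum()
    # Scale normalization: for p+q = 2 the divisor is m00^2
    eta20, eta02, eta11 = mu20 / m00**2, mu02 / m00**2, mu11 / m00**2
    h1 = eta20 + eta02                         # rotation invariant
    h2 = (eta20 - eta02)**2 + 4 * eta11**2     # rotation invariant
    return h1, h2
```

Because the moments are centered, scale-normalized, and combined into rotation-invariant expressions, a small square and a large square yield (almost) the same values, while an elongated rectangle yields clearly different ones — exactly the property that makes template matching robust.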

How did the author evaluate the system's performance?

Performance was evaluated through multiple test cases, measuring hand detection accuracy and sign recognition success across different videos and signers, resulting in an overall recognition accuracy of 80-83%.


Details

Title
Vision Based Sign Language Identification System Using Facet Analysis
Grade
A+
Author
Faryal Amber (Author)
Publication Year
2013
Pages
68
Catalog Number
V276571
ISBN (eBook)
9783656697541
ISBN (Book)
9783656698029
Language
English
Tags
Contours, Skin Detection, Shape Matching, Gesture Recognition, Hu Moments Comparison, Sign Language Identification System
Product Safety
GRIN Publishing GmbH
Quote paper
Faryal Amber (Author), 2013, Vision Based Sign Language Identification System Using Facet Analysis, Munich, GRIN Verlag, https://www.hausarbeiten.de/document/276571