Advanced Computer Graphics and Digital Human Lab

Our mission

The Advanced Computer Graphics and Digital Human Lab was established in April 2017 as one of four permanent labs of the Information System Science and Engineering (ISSE) course at the College of Information Science and Engineering. Our lab contributes to the overall ISSE mission of providing an English-based engineering education at Ritsumeikan University for both international and Japanese students.

Today's global society faces many challenges. Several factors, including an ageing population, the need for health and safety support, demands for comfort and ergonomics, and the difficulty of maintaining good living conditions during pandemics, call for more human-centered engineering technologies. Our laboratory's research deals with human-oriented modeling and with the development of visualization and monitoring systems using VR tools, sensory devices, experimental data, and knowledge and data bases.

We develop real-time 3D computer graphics algorithms and methods (such as volume rendering, 3D segmentation, mesh reconstruction, and morphing) tailored to human-oriented modeling for healthcare and beauty services. Using motion capture data, we estimate the dynamic parameters needed to model human movements realistically.
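As a minimal illustrative sketch of this kind of parameter estimation (not code from any of our projects), a joint angle such as elbow flexion can be computed directly from three motion-capture marker positions; the marker coordinates below are hypothetical:

```python
import math

def joint_angle(a, b, c):
    """Angle at joint b (degrees) formed by 3D points a-b-c,
    e.g. shoulder-elbow-wrist markers from motion capture."""
    ba = [a[i] - b[i] for i in range(3)]
    bc = [c[i] - b[i] for i in range(3)]
    dot = sum(x * y for x, y in zip(ba, bc))
    norm = math.dist(a, b) * math.dist(c, b)
    # Clamp to [-1, 1] to guard against floating-point drift
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

# Collinear markers (straight arm): about 180 degrees
print(round(joint_angle((0, 0, 0), (1, 0, 0), (2, 0, 0))))  # 180
# Right-angle bend at the middle marker
print(round(joint_angle((0, 0, 0), (1, 0, 0), (1, 1, 0))))  # 90
```

Tracking such angles frame by frame yields joint trajectories, from which velocities and accelerations of a movement can be estimated.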

Digital models are used to estimate ergonomic factors such as grasping characteristics and reachability. We pay particular attention to visualization systems with haptic devices for VR surgery and VR nursing. Digital human modeling is driven by real-time sensory and feedback devices (trackers, accelerometers, haptics) and utilizes experimentally collected databases and machine learning methods.
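To illustrate the reachability idea in its simplest form, the toy check below tests whether a target lies within a fixed arm's reach of the shoulder; the 0.65 m arm length is an assumed placeholder, and real digital human models use full kinematic chains with joint limits rather than a single sphere:

```python
import math

def reachable(shoulder, target, arm_length=0.65):
    """Coarse reachability test: is the target (x, y, z in meters)
    within a sphere of radius arm_length around the shoulder joint?
    A stand-in for full kinematic-chain reachability analysis."""
    return math.dist(shoulder, target) <= arm_length

print(reachable((0, 0, 1.4), (0.3, 0.2, 1.2)))  # True: target ~0.41 m away
print(reachable((0, 0, 1.4), (0.9, 0.0, 1.4)))  # False: target 0.9 m away
```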

In addition, navigation systems based on wearable devices, such as smartphones and smart glasses, are among the research topics in our lab.

Below are some research areas proposed for our Bachelor's and Master's students:

  • Human-centered modeling driven by real-time sensory and feedback devices (trackers, accelerometers, haptics, lidars, EEG, EMG, RGB(-D) cameras, etc.)
  • Motion capture, transfer, and biomechanical analysis for sports, healthcare, and ergonomics applications; human posture analysis; fall detection
  • Hand detection and hand tracking, hand gesture recognition and analysis
  • Machine learning for human behavior and state detection, monitoring and analysis
  • Pedestrian and cyclist detection in different lighting and traffic conditions
  • Smartphones, wearable devices and sensors for human daily activity monitoring
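As a deliberately simplified sketch of the wearable-sensor direction (a threshold heuristic, not one of the methods studied in the lab; the thresholds and the accelerometer trace are hypothetical), a fall candidate can be flagged when a near-free-fall dip in acceleration magnitude is followed shortly by an impact spike:

```python
import math

# Hypothetical thresholds in units of g; real systems tune or learn these.
FREE_FALL_G = 0.4   # magnitude drop suggesting the body is falling
IMPACT_G = 2.5      # magnitude spike suggesting impact with the ground

def detect_fall(samples, window=10):
    """Return the sample index of a fall-like impact, or None.
    A fall is flagged when a near-free-fall dip is followed within
    `window` samples by an impact spike. `samples` is a sequence of
    (ax, ay, az) accelerometer readings in g."""
    mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in samples]
    for i, m in enumerate(mags):
        if m < FREE_FALL_G:
            for j in range(i + 1, min(i + 1 + window, len(mags))):
                if mags[j] > IMPACT_G:
                    return j  # index of the impact sample
    return None

# Standing still (1 g), brief free fall, hard impact, then at rest
trace = [(0, 0, 1.0)] * 5 + [(0, 0, 0.1)] * 3 + [(0, 0, 3.2)] + [(0, 0, 1.0)] * 5
print(detect_fall(trace))  # 8
```

Research-grade systems replace such fixed thresholds with machine learning models trained on multimodal sensory data, as in the thesis work listed below.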


Below are some recent thesis titles of our students:

    Human motion and posture recognition and analysis
  • Posture Corrector: Incorrect Posture and Repetition Identifier using OpenPose and Deep Neural Network
  • Investigation of Posture Similarity Metrics for Online Dance Learning Support
  • Fall detection and classification by multimodal sensory data
  • A Sport Scoring System based on Atomic Temporal Patterns and Auto-Encoder
  • A Machine Learning Approach to Support Injury Prevention during Weight Lifting Sports
  • Haptic Simulation for Fine Motor Performance Evaluation during Rest-to-Rest Circular Movements
  • Human Interaction Recognition using Body Pose Estimation

    Sign language and air writing recognition
  • Comparison of Image-Based and Skeleton-Based Machine Learning Methods in the Task of Alphabetical Sign Language Recognition
  • Real-Time Dynamic Sign Language Recognition Using LSTM Based on MediaPipe Hand Data
  • Air Writing in Japanese: A CNN-based character recognition system using hand tracking

    Sensory data analysis for human state identification
  • E-Worker Mental Fatigue Detection through Mindwave EEG Data and Deep Neural Networks
  • Human Daily Activity Recognition based on Energy Consumption Data from Fibion Wearable Sensor
  • Vision System for Color Vision Deficiency Correction
  • Driver Hand Recognition System using Near-Infrared Camera
  • Behavioral Biometrics based User Authentication for Online Games Using Deep Neural Networks

    Human indoor positioning and navigation
  • Augmented Reality-based Indoor Navigation Using SLAM and Hybrid Map Information
  • Indoor Positioning using Smartphone Sensors: A LIDAR and IMU-Based Approach

    Video/image processing and VR
  • Motherboard Component Detector with Augmented Reality Representation
  • LiGenCam: Reconstruction of Color Camera Images from Multi-Modal LiDAR Data for Autonomous Vehicles

    Human recognition and intention prediction in traffic environment
  • Pedestrian Detection in Different Lighting Conditions using Deep Neural Networks and Multispectral Images
  • Detection of Cyclist’s Crossing Intention based on Posture Estimation for Autonomous Driving
  • Cyclist Speed Estimation using Smartphone Sensors
  • LiDAR Data based Pedestrian Orientation Recognition with the aid of Super-resolution GAN
  • Cyclist Orientation Estimation using LiDAR Sensor
  • Skateboarder's Intention Prediction using Skeleton-based Information and LSTM
  • Pedestrian Behavior Recognition using LiDAR Data and Deep Neural Networks
  • Prediction of Vehicle-to-Pedestrian Collision based on Pedestrian Orientation Recognition and Road Segmentation




Last update: March 27, 2024