Dae-Cheol Noh

AI Researcher & Engineer

Hi! I’m Dae-Cheol Noh, and I’m an AI engineer.
I enjoy image processing and computer vision and have studied them for a long time,
so Vision AI is my favorite area of deep learning.
Beyond Vision AI, I’m also interested in computer graphics and GPU acceleration technologies (OpenACC, OpenMP, etc.).
What I’m currently interested in »

  • Machine Learning + Deep Learning
    • Unsupervised Learning (Clustering, GAN…)
    • Semantic/Instance Segmentation
    • XAI
    • etc.
  • Computer Vision + Image Processing
  • Mathematics (Calculus + Linear Algebra)
  • Computer Graphics
  • GPU Accelerating Tech.
  • (I also have experience with Android development.)
I can also do DevOps »
  • Docker, Kubernetes, Kubeflow Pipeline
  • FastAPI, Flask, Django, Nginx
  • MongoDB, PostgreSQL, Redis, MySQL, etc.
  • AWS
  • CI/CD
My preferred development environment »
  • Linux OS
  • vim
  • Visual Studio Code
  • MacBook Pro 14″ (M1 Max)
Location
Sadang-dong, Dongjak-gu, Seoul, South Korea
Email
GitHub
bolero2
LinkedIn
Dae-Cheol Noh

Experience

(Intern) AI Engineer at CyberLogitec

I worked as an intern at CyberLogitec, using deep learning to detect gastric and colon cancer in endoscopic images. The dataset consisted of 5,800 gastric-cancer and 3,000 colon-cancer endoscopic images (ENDO, DICOM format). I applied detection networks such as YOLOv5, DetectoRS, and EfficientDet, and classification networks such as ResNet and EfficientNet. The project was completed successfully, reaching a recall of 0.80.

Highlights

  • Detection of gastric/colon cancer in endoscopic images
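The recall figure above is the fraction of true lesions the detector actually finds. As a hedged sketch (the IoU threshold and helper names below are illustrative, not from the original project), detection recall can be computed per image like this:

```python
def iou(box_a, box_b):
    # boxes as (x1, y1, x2, y2); intersection-over-union of two axis-aligned boxes
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def detection_recall(gt_boxes, pred_boxes, iou_thr=0.5):
    # recall = matched ground-truth boxes / all ground-truth boxes
    matched = sum(1 for gt in gt_boxes
                  if any(iou(gt, p) >= iou_thr for p in pred_boxes))
    return matched / len(gt_boxes) if gt_boxes else 0.0

gt = [(0, 0, 10, 10), (20, 20, 30, 30)]   # two annotated lesions
pred = [(1, 1, 10, 10)]                    # detector found only one
print(detection_recall(gt, pred))          # one of two lesions found -> 0.5
```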

AI Researcher/Engineer at NeuralWorks Lab

We developed Neural Studio, a platform that makes it easy for anyone to train AI models. Neural Studio supports not only Vision AI tasks such as classification, object detection, and segmentation, but also machine learning tasks such as linear regression, clustering, and time series, so that whatever dataset comes in, training, evaluation, and deployment can run in one flow.

Highlights

  • Development of an ML/DL Training Framework: Designed a job-based routine for the entire ML/DL training process.
  • Implementation of an Inference Service: Created a Kubernetes-based model serving and inference routine using Python, PyTorch, and TensorFlow-Keras.
  • Model Refactoring: Refactored ML/DL models (scikit-learn, PyTorch, TensorFlow-Keras) to fit the platform's requirements.
  • Proficiency in Docker and AWS Cloud Computing: Built Docker containers for training and utilized AWS cloud computing resources.
  • Deployment and Inference Server Development: Deployed trained models for inference, developed end-to-end inference routines using Python, and managed deployment servers (services and pods) via Kubernetes.
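The job-based routine mentioned in the first highlight can be sketched as a small pipeline in which each stage (preprocess, train, evaluate) is an independent job sharing one state dict; the class names and stages below are illustrative placeholders, not the actual Neural Studio API:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Job:
    # one stage of the training pipeline; `run` takes and returns a state dict
    name: str
    run: Callable[[dict], dict]

@dataclass
class Pipeline:
    jobs: list = field(default_factory=list)

    def add(self, name, fn):
        self.jobs.append(Job(name, fn))
        return self  # allow chaining

    def execute(self, state=None):
        state = dict(state or {})
        for job in self.jobs:  # each job reads and extends the shared state
            state = job.run(state)
        return state

# hypothetical stages for a toy regression task
pipe = (Pipeline()
        .add("preprocess", lambda s: {**s, "data": [(x, 2 * x) for x in range(5)]})
        .add("train", lambda s: {**s, "model": lambda x: 2 * x})
        .add("evaluate", lambda s: {**s, "error": sum(abs(s["model"](x) - y)
                                                      for x, y in s["data"])}))
result = pipe.execute()
print(result["error"])  # the toy model fits exactly -> 0
```

In the real framework each stage would launch training or evaluation code rather than a lambda, but the job-chaining structure is the same idea.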

AI Researcher & Engineer at GOPIZZA

While working at GOPIZZA, I developed an AI Smart Topping Table based on instance segmentation. I built the entire dataset pipeline of collection, cleaning, and processing, and introduced an auto-annotation function to make the annotation process efficient. I also built the infrastructure of the AI Smart Topping Table using technologies such as Redis and Nginx.

AI Engineer at Nota AI (present)

While working at Nota AI, I have developed and deployed AI models for an ITS (Intelligent Transportation System). For model training, I managed data efficiently using AWS Lambda, Glue jobs, Athena queries, and similar services, and took a DataOps approach to data quality, for example by improving the labeling guideline document. I also built an efficient MLOps system by running Kubernetes and Kubeflow Pipelines entirely on-premise, and across various projects I gained experience with efficient evaluation methods, Streamlit, building and improving CVAT, and EC2 deployment.

Education

Bachelor's degree in Computer Engineering from SeoKyeong University (GPA 3.52)

Awards

[KSC2018] Encouragement Award, Undergraduate Division of the Undergraduate/Junior Paper Competition, from the President of the Korean Institute of Information Scientists and Engineers (KIISE)

Publications

An Efficient Hand Gesture Recognition Method Using a Dual-Stream 3D Convolutional Neural Network Architecture, published by the Korean Society for Next Generation Computing

Recently, there have been active studies on hand gesture recognition to increase immersion and provide user-friendly interaction in virtual reality environments. However, most studies require specialized sensors or equipment, or show low recognition rates. This paper proposes a hand gesture recognition method that uses deep learning to recognize static and dynamic hand gestures without any sensors or equipment other than a camera. First, a series of hand gesture input images is converted into high-frequency images; then the RGB hand gesture images and their high-frequency counterparts are each learned through a DenseNet-based three-dimensional convolutional neural network. Experiments on 6 static and 9 dynamic hand gestures showed an average recognition rate of 92.6%, an increase of 4.6% over the previous DenseNet. A 3D defense game was implemented to verify the results, and with an average gesture recognition time of 30 ms the method proved usable as a real-time user interface for virtual reality applications.
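The high-frequency stream described in the abstract can be approximated by subtracting a local mean from the image, which keeps edges and texture while suppressing flat regions; this NumPy sketch is illustrative and not the paper's exact preprocessing:

```python
import numpy as np

def high_frequency(img, k=3):
    # high-frequency component = image minus its k x k local mean (edges, texture)
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    h, w = img.shape
    # build the local mean via a sliding-window sum over the k x k neighborhood
    acc = np.zeros((h, w), dtype=float)
    for dy in range(k):
        for dx in range(k):
            acc += padded[dy:dy + h, dx:dx + w]
    local_mean = acc / (k * k)
    return img - local_mean

img = np.zeros((6, 6))
img[:, 3:] = 10.0            # a vertical step edge
hf = high_frequency(img)
# the response is strongest along the edge and zero in flat regions
print(np.abs(hf).max() > np.abs(hf[:, 0]).max())
```

In the dual-stream network, the RGB frames and these high-frequency frames would each feed one stream before fusion.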

A DenseNet-Based Dual-Stream 3D Convolutional Neural Network Architecture for Real-Time Hand Gesture Recognition, published by the Korean Institute of Information Scientists and Engineers (KIISE)

As virtual reality is used in many fields such as medicine, education, and military training, research on hand gesture recognition for free interaction in virtual environments is being actively conducted. However, most approaches require separate sensors or show low recognition rates. This paper proposes a hand gesture recognition method for static and dynamic hand gestures that uses deep learning without any sensors or equipment other than an ordinary USB camera. The input hand gesture images are converted into high-frequency images; a DenseNet-based dual-stream convolutional neural network is then run on each hand gesture image and its high-frequency image, and the fused information is used to recognize the hand gesture more accurately. To verify the real-time interface, a virtual-reality-based 3D defense game was developed; experiments on 6 static and 9 dynamic hand gesture interfaces showed an average recognition rate of 92.6%, a 4.58% improvement over the existing single-stream DenseNet. The result of this study can be used as an input interface in various virtual reality applications without a mouse or keyboard.

Fingertip Detection Using Atrous Convolution and Grad-CAM, published by the Korea Computer Graphics Society

With the development of deep learning technology, user-friendly interfaces suitable for virtual reality and augmented reality applications are being actively researched. To support an interface using the user’s hands, this paper proposes a deep-learning-based fingertip detection method that tracks fingertip coordinates so the user can select virtual objects or write and draw in the air. The approximate region of the fingertip is first cropped from the input image with Grad-CAM, and a convolutional neural network with atrous convolution is then run on the cropped image to detect the fingertip location. This method is simpler and easier to implement than existing object detection algorithms, and requires no preprocessing to annotate objects. To verify the method we implemented an air-writing application; with a recognition rate of 81% and a speed of 76 ms, users could write smoothly in the air without delay, making real-time use of the application possible.
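Atrous (dilated) convolution, used above, spaces the kernel taps apart to enlarge the receptive field without adding parameters. A minimal 1-D NumPy sketch (illustrative, not the paper's implementation):

```python
import numpy as np

def atrous_conv1d(x, kernel, dilation=1):
    # dilated 1-D convolution (no padding): kernel taps are `dilation` apart
    k = len(kernel)
    span = (k - 1) * dilation + 1          # effective receptive field
    out_len = len(x) - span + 1
    return np.array([
        sum(kernel[j] * x[i + j * dilation] for j in range(k))
        for i in range(out_len)
    ])

x = np.arange(8, dtype=float)              # [0, 1, ..., 7]
edge = np.array([-1.0, 0.0, 1.0])          # simple difference kernel
print(atrous_conv1d(x, edge, dilation=1))  # differences over span 3 -> all 2.0
print(atrous_conv1d(x, edge, dilation=2))  # span widens to 5 -> all 4.0
```

With dilation 2 the same 3-tap kernel covers 5 input positions, which is why dilated filters see more context at no extra parameter cost.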

A VGGNet-Based Object Detection Network for Real-Time Fingertip Detection, published by the Korean Society for Next Generation Computing

Recently, rapidly developing deep learning technology has been actively applied to provide user-friendly interfaces in virtual reality and augmented reality applications. This paper proposes a deep-learning-based method that detects fingertips in real time to provide an interface using the user’s hand. The method introduces DenseNet connectivity into the VGG-19 network, removing the annotation preprocessing required by existing object detection networks, reducing the total number of parameters and the processing time, and detecting fingertips using atrous convolution and Grad-CAM. Experiments in various environments showed that real-time processing at 34.4 ms is possible, with an average recognition rate 5% higher than the existing method (an SSD network). Based on this result, an application for real-time air writing with the user’s fingertip was developed, demonstrating its usability as a user interface.

Languages

English
Fluency: 3/5

Skills

ML/DL
Level: Master
Keywords:
  • TensorFlow
  • Keras
  • PyTorch
  • MXNet
  • Scikit-Learn
Programming Language
Level: Master
Keywords:
  • C
  • C++
  • C#
  • Java
  • Python
  • Assembly Script
Tools
Level: Master
Keywords:
  • Unity
  • Android Studio