Senyas
we implemented a Filipino Sign Language (FSL) recognition model for both static and dynamic hand gestures using MediaPipe.
the goal is simple: turn FSL gestures into text from a live camera feed. the pipeline goes like this: sign language gesture → model prediction → text.
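to give a rough idea of how that pipeline can look, here's a minimal sketch that uses MediaPipe Hands to pull landmarks from webcam frames and hands them to a classifier. the `model.predict` call is a placeholder for whatever trained classifier you plug in, not the exact model used in this repo.

```python
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands

def landmarks_to_features(hand_landmarks):
    # flatten the 21 (x, y, z) hand landmarks into a 63-value feature vector
    return [coord for lm in hand_landmarks.landmark for coord in (lm.x, lm.y, lm.z)]

cap = cv2.VideoCapture(0)  # webcam feed
with mp_hands.Hands(max_num_hands=1, min_detection_confidence=0.5) as hands:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB; OpenCV captures BGR
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            features = landmarks_to_features(results.multi_hand_landmarks[0])
            # text = model.predict([features])  # placeholder: your trained gesture classifier
cap.release()
```

for dynamic gestures, the same landmark features are typically stacked over a window of frames before being passed to the model, so the classifier sees motion rather than a single pose.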
if you're keen on trying it out for yourself, you can check it out here. we also wrote some articles on the process of building it, which you can find here.