We’re pleased to announce that PhasaTek Labs has officially been incorporated as a kabushiki kaisha. This milestone marks a meaningful transition from an ambitious idea to a structured organization with a clear vision for the future of speech and language technology, with a special focus on XR/spatial platforms, expanded accessibility features, and learning tools for a wide range of practical and educational uses.
Our Origins
PhasaTek Labs is a specialized research and development firm headquartered in Tokyo, Japan. While our incorporation is recent, our founding dates back to February 2024, when the release of the Apple Vision Pro, a landmark spatial computer, galvanized us into creating intelligent, scalable, and impactful technology solutions with a granular focus on augmenting language at the pace of practical use.
In Asia’s fast-moving technology ecosystem, our compact team has consistently delivered industry-leading advances. From our inception, we’ve been dedicated to forging a unique approach tailored to developers, students, educational institutions, and businesses around the globe.
Introducing Mahina: The Next Generation of Speech, Language and AR Environment Tools
At the core of PhasaTek’s mission is Project Mahina, a comprehensive cross-platform suite of natural language processing and spatial computing tools designed to transform how users interact with language, information, and real-world environments across multiple contexts.
Our flagship product, Mahina HUD for visionOS, delivers an immersive augmented reality experience featuring:
- Live Speech Mode with real-time speech-to-text conversion (a brief illustrative sketch follows this list)
- Dynamic real-time subtitling for enhanced accessibility
- Advanced NLP analysis tools for linguistics practitioners
- Document import capabilities for long-form text analysis
- Intuitive handwriting input via PencilKit integration
- XR environment augmentation/navigation tools (in early development)
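For readers curious about the underlying plumbing, here is a minimal sketch of how live speech-to-text is commonly wired up on Apple platforms using the Speech and AVFoundation frameworks. It is illustrative only and not Mahina HUD’s actual implementation; the class and callback names are ours, and authorization handling is omitted for brevity.

```swift
import AVFoundation
import Speech

// Illustrative only: a minimal live speech-to-text loop with Apple's Speech
// framework. Microphone/speech authorization and error recovery are omitted.
final class LiveTranscriber {
    private let audioEngine = AVAudioEngine()
    private let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US"))
    private let request = SFSpeechAudioBufferRecognitionRequest()
    private var task: SFSpeechRecognitionTask?

    func start(onText: @escaping (String) -> Void) throws {
        request.shouldReportPartialResults = true // stream partial hypotheses for live subtitles

        // Feed microphone buffers into the recognition request.
        let inputNode = audioEngine.inputNode
        let format = inputNode.outputFormat(forBus: 0)
        inputNode.installTap(onBus: 0, bufferSize: 1024, format: format) { [weak self] buffer, _ in
            self?.request.append(buffer)
        }
        audioEngine.prepare()
        try audioEngine.start()

        // Each callback delivers the best transcription so far.
        task = recognizer?.recognitionTask(with: request) { [weak self] result, error in
            if let result { onText(result.bestTranscription.formattedString) }
            if error != nil { self?.audioEngine.stop() }
        }
    }
}
```

Partial results arrive continuously, which is what makes a pipeline of this shape suitable for live subtitling as well as dictation.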
For users working on macOS and Linux, MahinaEditor will provide an NLP-focused word processor with support for the following:
- Comprehensive NaturalLanguage framework toolsets (a short example follows this list)
- Native Homebrew support for extended functionality
- Local document import and rich text formatting
- Agentic model API integration features (OpenAI, Anthropic, Google)
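As a taste of the kind of analysis involved, here is a small, self-contained example of part-of-speech tagging with Apple’s NaturalLanguage framework. It is a generic illustration rather than MahinaEditor code, and the sample sentence is our own.

```swift
import NaturalLanguage

// Illustrative use of Apple's NaturalLanguage framework: part-of-speech
// tagging over a short text.
let text = "PhasaTek Labs builds speech and language tools in Tokyo."
let tagger = NLTagger(tagSchemes: [.lexicalClass])
tagger.string = text

tagger.enumerateTags(in: text.startIndex..<text.endIndex,
                     unit: .word,
                     scheme: .lexicalClass,
                     options: [.omitPunctuation, .omitWhitespace]) { tag, range in
    if let tag { print("\(text[range]): \(tag.rawValue)") }
    return true // keep enumerating
}
```

The same API also exposes lemmatization and named-entity schemes, which is the family of toolsets referred to above.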
Mobile users on iOS will be able to access Mahina’s powerful features through:
- ML Kit handwriting recognition with stroke-based input tools
- Multilingual NLP dictionaries for cross-language support
- Intuitive PencilKit input methods for natural interaction (a minimal canvas setup is sketched after this list)
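To show how little code a pen-input surface requires, here is a minimal PencilKit canvas sketch for iOS. It is illustrative only; Mahina’s actual view code and its hand-off to handwriting recognition are not shown.

```swift
import PencilKit
import UIKit

// Illustrative PencilKit setup: a drawing canvas whose strokes could later be
// handed to a handwriting recognizer. Not Mahina's actual view code.
final class HandwritingViewController: UIViewController {
    private let canvasView = PKCanvasView()

    override func viewDidLoad() {
        super.viewDidLoad()
        canvasView.frame = view.bounds
        canvasView.tool = PKInkingTool(.pen, color: .label, width: 4)
        canvasView.drawingPolicy = .anyInput // accept both finger and Apple Pencil input
        view.addSubview(canvasView)
    }

    // The captured strokes (a PKDrawing) can be exported as point data for recognition.
    func currentStrokes() -> PKDrawing { canvasView.drawing }
}
```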
Developers, organizations, and educational institutions requiring programmatic access will be able to leverage our comprehensive APIs and CLI tools for natural language processing across multiple languages:
- Custom RESTful web APIs for structural language analysis (a hypothetical request is sketched after this list)
- Command-line utilities for batch text processing and automation
- Multi-language support with specialized APIs for diverse linguistic structures
- Developer-friendly documentation for rapid integration
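As a purely hypothetical illustration of what programmatic access could look like, the snippet below posts text to a placeholder endpoint using Swift’s URLSession. The URL, request fields, and response handling are invented for this example and are not PhasaTek’s published API.

```swift
import Foundation

// Hypothetical example only: the endpoint, fields, and response shape are
// placeholders, not a published PhasaTek API.
struct AnalysisRequest: Codable {
    let text: String
    let language: String
}

func analyze(_ text: String) async throws -> Data {
    var request = URLRequest(url: URL(string: "https://api.example.com/v1/analyze")!)
    request.httpMethod = "POST"
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    request.httpBody = try JSONEncoder().encode(AnalysisRequest(text: text, language: "en"))

    // The response body would carry the structural analysis as JSON.
    let (data, _) = try await URLSession.shared.data(for: request)
    return data
}
```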
Across all platforms, Mahina leverages native frameworks, APIs and contemporary UI/UX principles to create a cohesive, intelligent, language-aware ecosystem that makes sophisticated NLP accessible to both everyday users and specialized developers.
Who Benefits from Our Technology?
Mahina is engineered to serve diverse users who depend on advanced language processing and accessibility tools in their professional and personal workflows. Our technology empowers several key communities:
- Research and Academic Communities benefit from comprehensive linguistic analysis tools, with NLP researchers, computational linguists, and academic institutions gaining access to sophisticated language processing capabilities for their studies and publications.
- Accessibility-Focused Users find essential support through our real-time transcription and multilingual processing features, particularly individuals with visual/hearing impairments, speech differences, or language processing needs who require reliable communication assistance.
- General Consumers experience enhanced digital interaction through intuitive language tools that make technology more accessible and responsive to natural communication patterns across multiple platforms and devices.
- Developer Ecosystem gains powerful integration capabilities, with software engineers and application developers accessing robust APIs and CLI tools to embed advanced NLP functionality into their own language-aware applications and services.
- Educational Professionals create more inclusive learning environments using our accessibility features and multilingual support, helping educators develop materials that serve diverse student populations with varying language and communication needs.
- Global Business Users streamline their cross-cultural communication workflows through comprehensive multilingual processing tools, enabling seamless collaboration that bridges language barriers across countries.
From AR-enhanced real-time subtitling to mobile handwriting tools and comprehensive desktop NLP word processors, we’re building tools that support individuals and teams who need advanced yet accessible language technologies. Our solutions empower users across education, software development, accessibility services, and everyday life to communicate, create, and analyze more effectively regardless of language or medium.
What Incorporation Means to Us
For PhasaTek Labs, incorporation represents more than a business milestone: it embodies our long-term commitment and organizational maturity. Outgrowing the ‘startup mindset’ can be a challenge, and this transition formalizes the foundation that supports our operations and provides momentum for sustainable growth.
This step forward means:
- Established governance structures enabling transparent operations and efficient management
- Enhanced ability to form strategic partnerships and secure investment
- A formalized commitment to continuous innovation and technical excellence
Our Vision Moving Forward
With incorporation complete as our first official step, we’re focusing on executing our ambitious roadmap with greater precision and impact. PhasaTek’s core mission remains unchanged: pushing technological boundaries, empowering clients with innovative solutions, and leveraging technology for meaningful impact.
Our roadmap includes significant developments in artificial intelligence, automation, and digital infrastructure. Beyond technological advancement, we’re equally committed to building a team culture where continuous learning, inclusivity, and integrity are foundational values.
In the coming years, PhasaTek Labs aims to:
- Make substantive contributions to language technology research
- Launch innovative solutions addressing real-world communication challenges
- Foster a collaborative workplace that nurtures professional growth
- Strengthen Japan’s position in the global digital ecosystem
We view our incorporation not as an endpoint, but as a catalyst for accelerating our technological innovations and market presence.
XR for Accessibility – 3D Audio, Visual Augmentation Tools
One of the most dynamic facets of our project uses mixed reality to assist individuals with visual or hearing impairments, speech differences, or language processing needs. This application uses spatial computing to accurately interpret and map the user’s surroundings, transforming complex spatial data into actionable sensory information.
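As one hedged illustration of turning spatial data into sound, the sketch below renders an audio cue from a 3D position around the listener using AVFoundation’s environment node. It assumes a bundled mono file named cue.caf and a position already obtained from scene mapping; it is not Mahina’s implementation.

```swift
import AVFoundation

// Illustrative 3D-audio sketch: play a cue as if it came from a point in the
// user's surroundings (e.g. a mapped doorway). Assumes a bundled mono "cue.caf".
final class SpatialCuePlayer {
    private let engine = AVAudioEngine()
    private let environment = AVAudioEnvironmentNode()
    private let player = AVAudioPlayerNode()

    init() {
        engine.attach(environment)
        engine.attach(player)
        // A mono connection into the environment node enables 3D positioning.
        let mono = AVAudioFormat(standardFormatWithSampleRate: 44_100, channels: 1)
        engine.connect(player, to: environment, format: mono)
        engine.connect(environment, to: engine.mainMixerNode, format: nil)
        environment.listenerPosition = AVAudio3DPoint(x: 0, y: 0, z: 0)
    }

    // Plays the cue as if it originated at (x, y, z) metres from the listener.
    func playCue(atMeters x: Float, _ y: Float, _ z: Float) throws {
        guard let url = Bundle.main.url(forResource: "cue", withExtension: "caf") else { return }
        let file = try AVAudioFile(forReading: url)

        player.position = AVAudio3DPoint(x: x, y: y, z: z)
        if !engine.isRunning { try engine.start() }
        player.scheduleFile(file, at: nil)
        player.play()
    }
}
```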
By exploring advanced spatial computing, we’re designing thoughtful solutions to real-world problems. We encourage the community to join us in beta testing Mahina HUD later this year! To learn more, email testing@phasatek.jp
A Note of Gratitude
As we mark this milestone, PhasaTek Labs extends sincere appreciation to our early supporters, alpha testers, and friends who believed in our vision and contributed to our journey from initial concept to formal incorporation. Your encouragement and feedback have been invaluable, and we look forward to your continued support as we enter this exciting new chapter.
PhasaTek Labs: Visualizing Language and Speech