There's a lot of potential here, especially since you're already building custom firmware/controller logic and have deep experience with local AI. The torque sensor gives you a real-time force signal at ~100–200 Hz — that's a rich data stream most e-bike brands barely use beyond simple proportional assist.
Run a lightweight model (TinyML / TFLite on the controller MCU, or on a companion ESP32) that learns your pedaling signature over time:
- Builds a personal torque-cadence-speed profile across different terrains and conditions.
- Automatically adjusts the assist curve without you touching PAS levels — it just knows you want more on climbs and less on flats.
- Could run entirely offline, fits Omera's local-first philosophy.
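One minimal way to sketch the "learns your pedaling signature" idea: bin the ride state by cadence and speed, keep an exponential moving average of the rider's torque per bin, and scale assist against that learned baseline. All names, bin sizes, and constants here are hypothetical illustrations, not an actual Omera design:

```python
# Hypothetical sketch of an online assist-curve learner. It bins the ride
# state by cadence and speed, keeps an exponential moving average (EMA) of
# the rider's torque per bin, and scales assist relative to that baseline.

class AdaptiveAssistProfile:
    def __init__(self, alpha=0.05, base_assist=0.5):
        self.alpha = alpha            # EMA learning rate (assumed)
        self.base_assist = base_assist
        self.profile = {}             # (cadence_bin, speed_bin) -> EMA torque (Nm)

    def _bin(self, cadence_rpm, speed_kmh):
        return (int(cadence_rpm // 10), int(speed_kmh // 5))

    def update(self, torque_nm, cadence_rpm, speed_kmh):
        """Fold one 100-200 Hz torque sample into the rider's profile."""
        key = self._bin(cadence_rpm, speed_kmh)
        prev = self.profile.get(key, torque_nm)
        self.profile[key] = (1 - self.alpha) * prev + self.alpha * torque_nm

    def assist_factor(self, torque_nm, cadence_rpm, speed_kmh):
        """More assist when current torque exceeds the learned baseline
        (e.g. an unexpected climb), less when below it."""
        key = self._bin(cadence_rpm, speed_kmh)
        baseline = self.profile.get(key, torque_nm)
        if baseline <= 0:
            return self.base_assist
        return max(0.0, min(1.0, self.base_assist * (torque_nm / baseline)))
```

The point of the bin-plus-EMA structure is that it is cheap enough for a control loop: one dict lookup and one multiply-add per sample, no training pass needed on-device.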
Fuse torque sensor data with GPS + altimeter to anticipate what's coming:
- Pre-load motor torque 1–2 seconds before a hill hits, so assist feels seamless.
- Learn your commute routes and pre-cache optimal power curves per segment.
- Optimize battery spend: "You have 14 km left, 2 hills — here's the power budget."
This is the novel one. The torque signal contains subtle patterns that shift as you fatigue:
- Pedal stroke asymmetry — left vs. right leg force imbalance increases when tired.
- Torque variability — even normally smooth riders produce a jerkier, less consistent torque trace when fatigued.
- Cadence drift — natural RPM drops without the rider noticing.
- An on-device model could detect fatigue onset and silently increase assist to keep you safe, or nudge you to take a break.
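A host-side sketch of the three indicators. The thresholds are invented placeholders, and a real system would segment left/right strokes by crank phase rather than simply alternating peaks:

```python
import statistics

def fatigue_metrics(stroke_peaks_nm, cadence_rpm):
    """Hypothetical fatigue indicators from per-stroke peak torques
    (assumed alternating left/right strokes) and a cadence time series.
    Returns (asymmetry, variability, cadence_drift_rpm)."""
    left = stroke_peaks_nm[0::2]
    right = stroke_peaks_nm[1::2]
    ml, mr = statistics.mean(left), statistics.mean(right)
    asymmetry = abs(ml - mr) / (ml + mr)          # 0 = perfectly balanced
    variability = statistics.stdev(stroke_peaks_nm) / statistics.mean(stroke_peaks_nm)
    # cadence drift: second half of the window minus first half, in RPM
    half = len(cadence_rpm) // 2
    drift = statistics.mean(cadence_rpm[half:]) - statistics.mean(cadence_rpm[:half])
    return asymmetry, variability, drift

def fatigued(metrics, asym_thresh=0.12, var_thresh=0.25, drift_thresh=-5.0):
    """Invented thresholds; real ones would be learned per rider."""
    asym, var, drift = metrics
    return asym > asym_thresh or var > var_thresh or drift < drift_thresh
```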
Log torque + cadence + speed at high resolution and pipe it to a companion app:
- Power output estimation (watts derived from torque × angular velocity, i.e. from torque and cadence) without needing a power meter.
- Stroke-by-stroke analysis of the kind cyclists pay €500+ for with Garmin/SRM power meters.
- "Ghost mode" — ride against your own previous best on a route.
The torque signal is surprisingly biometric:
- Everyone has a unique pedal stroke signature (force curve shape, L/R balance, cadence preference).
- Train a small classifier on your stroke pattern.
- If someone else pedals the bike → motor doesn't engage. No keys, no app, just pedal and it recognizes you.
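As a toy version of the classifier idea, here's a nearest-template check on a few hand-picked stroke features. The feature choice, the enrollment averaging, and the distance threshold are all assumptions; a production version would be a small trained NN as described above:

```python
import math

def stroke_features(torque_curve):
    """Hypothetical feature vector from one pedal-stroke torque curve:
    peak torque, peak position (fraction of stroke), and mean/peak ratio."""
    peak = max(torque_curve)
    mean = sum(torque_curve) / len(torque_curve)
    peak_pos = torque_curve.index(peak) / len(torque_curve)
    return (peak, peak_pos, mean / peak)

def enroll(strokes):
    """Average the feature vector over the owner's enrollment strokes."""
    feats = [stroke_features(s) for s in strokes]
    n = len(feats)
    return tuple(sum(f[i] for f in feats) / n for i in range(3))

def is_owner(template, stroke, max_dist=0.5):
    """Engage the motor only if the stroke is close to the owner template.
    max_dist is an invented threshold."""
    return math.dist(template, stroke_features(stroke)) < max_dist
```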
| Feature | Complexity | Hardware needed |
|---|---|---|
| Adaptive assist curve | Medium | Torque + cadence + existing controller |
| Predictive terrain | Medium-high | + GPS module (ESP32 + u-blox) |
| Fatigue detection | Medium | Torque signal only, needs ML model |
| Power analytics | Low | Just logging + app |
| Rider authentication | Low-medium | Torque signal + small NN on MCU |
The power analytics + adaptive assist combo is probably the best bang-for-buck for v1 — useful immediately, differentiating vs. every other e-bike, and you can ship it with just the torque sensor you already have plus a small local model running on the controller. The rider-auth feature is a fun flex that fits Omera's anti-theft story too.
Want me to spec out the data pipeline for any of these (sensor → MCU → model → motor control)?
Here's a breakdown of where each feature from your Omera torque sensor AI spec can run — directly on the controller MCU/ESP32 ("on chip") vs. offloaded to an iPhone companion app.
On-Chip (MCU / ESP32 — TinyML)
These features need real-time, low-latency inference directly in the motor control loop and are well suited to TinyML frameworks like TFLite Micro running on your existing controller or a companion ESP32:
Adaptive Rider Profile — The core assist-curve adjustment must happen on-chip because it directly modulates motor output in real time. A small quantized model (INT8) learning your torque-cadence-speed profile fits comfortably in the tens-to-hundreds of KB of RAM an ESP32-S3 can spare for it. This is your most critical on-chip workload.
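As a rough illustration of why INT8 fits, here's a minimal sketch of symmetric per-tensor quantization (the general scheme TFLite Micro supports): one float scale plus one byte per weight, so an N-parameter model costs roughly N bytes instead of 4N. The function names are illustrative, not Omera code:

```python
def quantize_int8(weights):
    """Symmetric per-tensor INT8 quantization sketch: store one float
    scale plus one int8 per weight (~N bytes of flash/RAM for N weights,
    vs. 4N bytes for float32)."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights on the host for inspection."""
    return [x * scale for x in q]
```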
Predictive Terrain Assist (motor pre-loading) — The actual "ramp motor torque 1–2 seconds before the hill" command must execute on-chip with sub-10ms latency. The MCU reads torque + GPS/altimeter and applies a pre-cached power curve.
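On the MCU side, applying a pre-cached plan can be as cheap as a table scan per control tick. A hedged sketch, where `segment_assist` and the `(start_distance_m, assist_level)` segment format are my own invention:

```python
def segment_assist(segments, odometer_m):
    """MCU-side lookup sketch: the phone pushes a pre-cached, sorted list
    of (start_distance_m, assist_level) segments; the control loop just
    scans by odometer reading -- no heavy math needed at 100-200 Hz."""
    level = segments[0][1]
    for start, assist in segments:
        if odometer_m >= start:
            level = assist
        else:
            break
    return level
```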
Rider Authentication — A small neural-network classifier (~few KB) matching pedal stroke signatures runs at inference time on the MCU. If the stroke doesn't match, the motor simply doesn't engage — no phone needed, no network needed.
Fatigue Detection (inference) — The real-time detection of pedal asymmetry, torque variability, and cadence drift from the 100–200 Hz signal needs to happen on-chip so the controller can silently increase assist immediately.
On iPhone (Core ML / Companion App)
These features involve heavier computation, richer UI, or data that benefits from the phone's Neural Engine and storage:
Model Training & Updates — While inference runs on-chip, training the adaptive profile and fatigue models is too heavy for an MCU. The iPhone can retrain/fine-tune models on accumulated ride data using Core ML, then push updated weight files back to the ESP32 over BLE.
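The "push updated weight files over BLE" step mostly needs a framed, checksummed blob so the ESP32 can reject truncated or corrupted transfers. A sketch of one possible packet layout; the `OMRA` magic, header fields, and INT8 payload are invented for illustration:

```python
import struct, zlib

def pack_model_update(version, weights_int8):
    """Hypothetical OTA packet for pushing retrained INT8 weights over BLE:
    header (magic, version, payload length) + payload + CRC32 trailer."""
    payload = bytes(w & 0xFF for w in weights_int8)
    header = struct.pack("<4sHI", b"OMRA", version, len(payload))
    crc = zlib.crc32(header + payload)
    return header + payload + struct.pack("<I", crc)

def unpack_model_update(blob):
    """ESP32-side decode sketch: verify magic + CRC, return signed weights."""
    magic, version, length = struct.unpack_from("<4sHI", blob)
    payload = blob[10:10 + length]
    (crc,) = struct.unpack_from("<I", blob, 10 + length)
    if magic != b"OMRA" or zlib.crc32(blob[:10 + length]) != crc:
        raise ValueError("corrupt model update")
    return version, [b - 256 if b > 127 else b for b in payload]
```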
Predictive Terrain — Route Learning & Battery Budgeting — The "learn your commute routes and pre-cache optimal power curves" and "you have 14 km left, 2 hills — here's the power budget" logic involves GPS route history, map data, and optimization that fits naturally on the phone. The iPhone computes the plan, then sends segment-by-segment power targets to the MCU.
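A hedged sketch of the budgeting idea: split the remaining watt-hours across route segments, weighting climbs more heavily. The segment format and the grade penalty factor are invented placeholders for whatever optimizer the app actually runs:

```python
def power_budget(battery_wh, segments):
    """Hypothetical planner for "14 km left, 2 hills": weight each
    segment's energy share by distance x (1 + grade penalty), normalized
    so the total never exceeds the battery.
    segments = [(distance_km, avg_grade_pct), ...] -> Wh per segment."""
    weights = [d * (1 + max(0.0, g) * 0.5) for d, g in segments]
    total = sum(weights)
    return [battery_wh * w / total for w in weights]
```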
Riding Analytics / Training Mode — High-resolution logging of torque and cadence (with derived watts), stroke-by-stroke analysis, and "ghost mode" comparisons all require storage, visualization, and a UI — that's purely a companion app feature.
Fatigue Detection (pattern analysis & alerts) — While the MCU detects fatigue in real time, deeper trend analysis ("you've been fatiguing earlier this week") and nudge-to-rest notifications live in the app.
Split Summary
The clean split is: the MCU owns anything that touches the motor control loop in real time, while the iPhone owns training, planning, visualization, and anything that benefits from a screen or persistent storage. This also aligns well with Omera's local-first philosophy — the bike works fully offline on-chip, and the phone is an optional enhancement layer.