Voice notes and composition anywhere, anytime, from your phone.
Recently, I explained how I use Tana Capture to capture voice notes from iOS to Coda. This alchemy of tools makes it possible to dash off a voice note and then manually nudge it through a workflow until my voice appears as transcribed text in a Coda document.
There are many issues with this workflow:
- I need to push notes like this from Tana, via a Tana-based Supertag, into Coda through a designated webhook (a minimal sketch of that call appears after this list). This is extremely rigid and adds steps before the notes can be used further downstream.
- Voice notes are often jumbles of text that need refinement. I could use Tana AI to clean them up, or even Coda AI to shape them upon arrival. But in either case, you don’t often know the nature of the incoming notes and what prompts might work best. The context exists at the capture point (the mobile edge).
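To make that first pain point concrete, the Tana-to-Coda hop ultimately comes down to an HTTP POST against a webhook-triggered automation in the target doc. Here is a minimal sketch in Python, assuming such an automation already exists; the doc ID, rule ID, and token are placeholders, and the automation on the Coda side still has to map the payload into the document:

```python
# Minimal sketch: POST a captured note to a Coda webhook-triggered automation.
# Doc ID, rule ID, and token are placeholders; the automation inside the doc
# decides where the payload actually lands.
import requests

CODA_TOKEN = "your-coda-api-token"    # placeholder
DOC_ID = "your-doc-id"                # placeholder
RULE_ID = "your-automation-rule-id"   # placeholder

resp = requests.post(
    f"https://coda.io/apis/v1/docs/{DOC_ID}/hooks/automation/{RULE_ID}",
    headers={"Authorization": f"Bearer {CODA_TOKEN}"},
    json={"note": "Transcribed voice note text goes here."},
)
resp.raise_for_status()  # a non-2xx response means the hook or token is misconfigured
```

Every new kind of note means revisiting both the Supertag and that payload mapping, which is exactly the rigidity described above.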
I also remarked at the time:
I have a vision of Coda as the entirety of my note-taking/sense-making platform.
I’ve written a lot about note-taking and sense-making, wondering occasionally if there should be note-taking apps at all. PKM (personal knowledge management) is a big industry that never seems to hit the sweet spot for adoption. By and large, we are always looking for the right vehicle to end the note-taking app search.
Coda could serve as your second brain, thus ending the note-taking app madness. Connor McCormick has expressed similar visions in his post Session Somethings | An Executive Second Brain. I also wrote about it as far back as early 2020 here.
What if you could capture voice notes in Coda’s iOS app directly into any document and use AI to transcribe the notes into instantly useful text?
Earlier today, Grammarly added Voice Composer to its feature set. As soon as I could reach for my phone, I tested it with Coda and, not surprisingly, it works.
I have been using GrammarlyGo since its beta, and I’ve written about it often because it represents AI accessibility in a common framework across all your apps. I’ve observed that it is a modern AI copilot for anyone who develops collections of words.
Every app on my iPhone and iPad, including Coda’s mobile app, has access to Grammarly’s features.
Voice Notes are Ugly
I tapped the mic button and spoke these words. Content like this is often not recognizable a few weeks later. While the transcription was accurate, it included a number of um’s and words that I typically remove in an editing pass. More to the point, it’s a big ball of utterances that are difficult to follow. Voice notes are cool, but they aren’t really usable, especially by others, until you clean them up.
Grammarly transformed the text into something usable right inside my Coda document. No post-capture editing. No copying/pasting. No integration through Tana or webhooks.
In the time-tested words of the Mandalorian Creed,
“This is the way.”
But there’s more. Let your imagination run with this capability.
- In your voice notes, you can describe how to classify each note. Capturing voice notes directly to a table lets you perform all sorts of Coda AI processes on Grammarly-captured notes.
- Notes can be shared with other documents based on their content and other key markers in the text, which are easily teased out using Coda AI (a rough sketch of this routing idea follows this list).
- You could conceivably list tasks in the voice note that Coda AI should perform upon receipt.
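As a rough sketch of that routing idea, here is how notes captured to a table could be pushed to another document with Coda’s REST API. The doc IDs, table names, and column names (“Note”, “Category”) are hypothetical; a Coda AI column could populate “Category” inside the doc itself, and this external script only illustrates the hand-off:

```python
# Rough sketch of routing Grammarly-captured notes between docs via Coda's REST API.
# All IDs, table names, and column names are hypothetical placeholders.
import requests

TOKEN = "your-coda-api-token"                          # placeholder
SOURCE_DOC, SOURCE_TABLE = "doc-A", "Voice Notes"      # hypothetical
TARGET_DOC, TARGET_TABLE = "doc-B", "Project Notes"    # hypothetical
HEADERS = {"Authorization": f"Bearer {TOKEN}"}
BASE = "https://coda.io/apis/v1"

# Pull the captured notes, with cell values keyed by column name.
rows = requests.get(
    f"{BASE}/docs/{SOURCE_DOC}/tables/{SOURCE_TABLE}/rows",
    headers=HEADERS,
    params={"useColumnNames": "true"},
).json()["items"]

# Copy any note tagged "Project" into the second document's table.
for row in rows:
    values = row["values"]
    if values.get("Category") == "Project":
        requests.post(
            f"{BASE}/docs/{TARGET_DOC}/tables/{TARGET_TABLE}/rows",
            headers=HEADERS,
            json={"rows": [{"cells": [
                {"column": "Note", "value": values.get("Note", "")},
            ]}]},
        )
```

The point is not the script itself; it’s that once notes land in a table as clean text, classification and routing become ordinary downstream operations rather than capture-time gymnastics.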
Thoughts?