“There was a time, not long ago, when humans could say, my word is as good as gold, and that was as good as a signed contract. The power to create with language comes from coherence, alignment, and integrity of what we think, say, and do” - Kimba Arem, Molecular Biologist and Music Therapist, and DALL-E 2, Image Generator (using the same words).
In some Future(s), there is a good chance that everything you say out loud, or that is said about you, will be copied, cleaned, and ready for smart retrieval. Now, please do not be offended or scared by this concept. Your 1st Amendment rights are safe, and you can still choose to have things “off the record”; remember, YOU are in charge. I believe the spirit of development for these platforms was rooted in efficiency and memory enhancement. You use glasses when you cannot see well, you use hearing aids when you cannot hear well, so now, if you cannot remember well, you have “this”.
Now, for the sake of this thought-letter, the recording and streaming of what you see as you perform your daily or special activities will be called lifecasting. Note: Technology and culture are moving fast in this sector, so I’d also like to acknowledge that this can also be called “lifelogging,” “lifeblogging,” “glogging” (cyborg logging), “personal casting,” and “mobile blogging.”
It is widely used at music festivals, conventions, concerts, sporting events, and life-stage milestones of all kinds. Over the years, cameras, computers, and wireless systems have been stitched together to create today’s wearable lenses. These lenses are so small that anyone can lifecast, and there are thousands of lifecasters worldwide.
There have been thousands of versions, technological leaps, and evolutions of this idea since Steve Mann first heaved a 1970s video camera onto his head. However, I have observed that all of these evolved solutions remain heavily reliant on lensed instruments, probably because most of our past life streams were recorded with photo and video products, so there is a strong cultural trigger to reach for a camera at our life events.
I came across a Swedish company called Narrative (http://getnarrative.com), an early tech solution for this niche. In their post-launch research, Narrative discovered that what people most wanted to capture was audio logs of conversations rather than any visuals.
In my recent regular scans for Signals of the Future(s), I discovered a start-up lifecaster product called Rewind, which promises to record anything you’ve seen, said, or heard and make it searchable.
Quick Time Travel Experiment
Re-imagine the last few family, friend, or business events, big and small. Don’t you wish you had a sound log of the conversations rather than still pictures or waving hands in a video?
“What was the name of that streaming show my cousin told me about at the wedding?”
“Can’t believe it but this was the last time I heard Grandma’s voice”
“So that’s what I sound like while flying down a water flume”
“Now what did the hotel receptionist say again, first two rights and then take the elevator or take the second elevator and make a first right?”
“Damn, those were some great business ideas” or “Wow, that was a great conversation and could have been a podcast”
As mentioned in Power of Sound - Part II, acoustic-sensing technology removes the need for wearable video cameras and is more efficient: since audio data is much smaller than image or video data, it requires less bandwidth to process and can be relayed to a smartphone (local storage) via Bluetooth in real time.
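If you want a rough feel for the numbers behind that claim, here is a quick back-of-envelope sketch in Python; the bitrates are illustrative assumptions (speech-quality audio versus a common 720p streaming rate), not measurements from any particular device.

```python
# Back-of-envelope comparison of audio vs. video data rates.
# All figures below are illustrative assumptions, not product specs.

AUDIO_SAMPLE_RATE_HZ = 16_000   # typical speech-quality sample rate
AUDIO_BITS_PER_SAMPLE = 16      # 16-bit PCM, mono
VIDEO_BITRATE_BPS = 3_000_000   # ~3 Mbps, a common 720p streaming rate

audio_bitrate_bps = AUDIO_SAMPLE_RATE_HZ * AUDIO_BITS_PER_SAMPLE  # 256 kbps uncompressed

def megabytes_per_hour(bitrate_bps: float) -> float:
    """Convert a bitrate in bits/second to megabytes per hour."""
    return bitrate_bps * 3600 / 8 / 1_000_000

print(f"Audio: ~{megabytes_per_hour(audio_bitrate_bps):.0f} MB per hour, uncompressed")
print(f"Video: ~{megabytes_per_hour(VIDEO_BITRATE_BPS):.0f} MB per hour")
print(f"Video is roughly {VIDEO_BITRATE_BPS / audio_bitrate_bps:.0f}x larger before compression")
```

Even uncompressed, speech-quality audio sits comfortably within a Bluetooth link's capacity, which is the whole point of streaming it to the phone in real time.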
To support this potential Future(s) forecast, I introduced the silent-speech product EchoSpeech. They, too, recognized that any peripheral requiring the user to face or wear a camera is neither practical nor feasible. There are also major privacy concerns involving wearable cameras, for both the user and those with whom the user interacts.
The trend of using AI to record and save meeting notes has been growing in recent years. AI-powered transcription tools, such as Otter.ai and Rev.com, use natural language processing to accurately transcribe spoken words into text in real time. This technology has the potential to save time and improve accuracy in notetaking, as well as provide a searchable record of meetings for future reference.
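For the curious, here is a minimal sketch of how this class of tooling works under the hood, using the open-source Whisper speech-to-text library; the audio file name is a placeholder, and this is not how Otter.ai or Rev.com are actually built.

```python
# Minimal speech-to-text sketch using the open-source Whisper library
# (pip install openai-whisper). Illustrative only.
import whisper

model = whisper.load_model("base")             # small, CPU-friendly model
result = model.transcribe("team_meeting.m4a")  # placeholder audio file

# Full transcript as plain text, ready to be indexed or searched later.
print(result["text"])

# Timestamped segments make it easy to jump back to a moment in the call.
for segment in result["segments"]:
    print(f'[{segment["start"]:7.1f}s] {segment["text"].strip()}')
```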
In fast-growing organizations, cross-functional communication sometimes means a calendar stuffed with back-to-back meetings. What if there were an easier way to stay in the loop? Imagine if you could turn down meetings without ever missing key insights or information.
With AI tech, you can easily and instantly share meeting takeaways across your organization, removing the need for live meeting attendance.
In Evernote, every time you collect meeting notes, then, ZAP, it creates a summary, highlights each team member’s next steps, and outlines any unanswered questions for you by sending a Slack channel message.
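To make the magic behind that kind of ZAP a little less mysterious, here is a minimal sketch of a notes-to-summary-to-Slack pipeline; the model name, prompt, and webhook URL are placeholders, and this is not Evernote's or Zapier's actual integration.

```python
# Sketch of a notes -> summary -> Slack pipeline (illustrative only).
# Requires: pip install openai requests
import os
import requests
from openai import OpenAI

SLACK_WEBHOOK_URL = os.environ["SLACK_WEBHOOK_URL"]  # placeholder incoming webhook

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize_and_post(meeting_notes: str) -> None:
    """Summarize raw meeting notes and send the digest to a Slack channel."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; swap in whatever you use
        messages=[
            {"role": "system",
             "content": "Summarize these meeting notes. List each person's "
                        "next steps and any unanswered questions."},
            {"role": "user", "content": meeting_notes},
        ],
    )
    summary = response.choices[0].message.content

    # Slack incoming webhooks accept a simple JSON payload with a "text" field.
    requests.post(SLACK_WEBHOOK_URL, json={"text": summary}, timeout=10)

summarize_and_post("Raw notes pasted or pulled from your notes app go here.")
```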
Understanding your customers and clients is hard enough; Product Managers shouldn’t also have to hastily take notes while meeting with them. Luckily, with the avalanche of AI meeting-notes platforms, multitasking is a thing of the past! Simply press a button to get a GPT summary of your meeting insights, linked to the recording and transcript. Focus on understanding your customers, then generate a clip and tell a compelling story in the voice of your customer to get buy-in from your team on what to build next!
Have you ever had a complex meeting with multilingual participants? You need an AI meeting transcript that can match the pace of your conversation and the diversity of your team. tl;dv offers automatic, highly accurate transcripts with speaker recognition and speaker labels for Google Meet and Zoom – in more than 25 languages.
For free.
Delivered alongside your video recording just moments after a call, these GPT transcripts can be instantly translated and allow you to search your past conversations using keywords.
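And to show what “search your past conversations using keywords” can look like once a transcript exists as plain text, here is a toy sketch; the speakers, timestamps, and lines are made up, and this is not any vendor's actual search feature.

```python
# Toy keyword search over transcript segments; illustrative only.
from dataclasses import dataclass

@dataclass
class Segment:
    speaker: str
    start_seconds: float
    text: str

transcript = [
    Segment("Alice", 12.0, "Let's revisit the onboarding flow next sprint."),
    Segment("Bob", 47.5, "The streaming show my cousin mentioned was at the wedding."),
    Segment("Alice", 95.2, "Onboarding drop-off is our biggest open question."),
]

def search(segments: list[Segment], keyword: str) -> list[Segment]:
    """Return every segment whose text contains the keyword (case-insensitive)."""
    needle = keyword.lower()
    return [s for s in segments if needle in s.text.lower()]

for hit in search(transcript, "onboarding"):
    print(f"{hit.start_seconds:6.1f}s  {hit.speaker}: {hit.text}")
```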
There are some potential drawbacks to using AI-powered transcription tools. For example, these tools may struggle to accurately transcribe certain accents or dialects, resulting in errors or omissions in the meeting notes. Personally, I think this will only affect users in the tails of the bell curve and is not a major issue. Time, sampling, and learning will fix that eventually.
However, some people may be uncomfortable with the idea of their conversations being recorded and stored, which raises concerns about privacy and security. It is important to carefully consider the pros and cons of using AI-powered transcription tools and to choose a platform that best fits your needs and preferences.
It is clear that lifecasting is not going away, and AI-powered notetaking tools are making it easier to capture, transcribe, and share important information. However, it is worth considering the privacy implications of wearable cameras and thinking carefully about how we want to remember and capture our experiences.
If you're interested in learning more about lifecasting or AI-powered notetaking, please reach out to Richard Bukowski for a consultation. He can help you and your team prepare for the future by training you on these technologies and ensuring that you're able to use them effectively.
Peace and Happiness