The Subconscious Language
The subconscious language is a set of complex and subtle regularities that underlie a language. It is expressed through the body: in writing, in the sounds and words we pronounce, and in the movements we make.
For those on the ascension path, for teachers, and for psychologists, the ability to understand the language of the subconscious mind is a critical tool for reading your environment. How? By reading between the lines of what is said, and by noticing body expressions, body pains, tones of voice, and so on. We can also use it to understand ourselves, how Mother Earth works, Astrology, and more.
For example, you can say “yes” while shaking your head from side to side, really saying “no”. Most of the time we miss the subtlety of such language, because we are not self-aware.
We then repeat the same words, movements, and so on, based on triggers. The subconscious language is a language of its own, one we share as a collective: symbols like yes and no, the color of a butterfly, or a wink.
The subconscious language is the language of the soul, receiving messages from our higher self. This metaphoric language is also a universal one, shared by our collective and can be seen in dreams, fairy tales, myths and poetry. Imagine hearing your own spirit speaking to you in a voice you can understand.
The soul stores all memories and emotions, as well as the patterns of how we react to certain situations. Patterns are based on previous experience, culture, and so on. And since everything we think is stored in the collective, it brings us together as a shared human experience: we tap into it to get answers and use it as a base for our creativity.
Decoding the subconscious mind can be used for:
- Understanding the relationship between two people in Constellations/Astrology work
- Reprogramming your mind with motivational healing messages
- Understanding patterns of life
- And more …
There are various ways to understand the metaphoric language of the subconscious mind. Two examples are Reverse Speech and Symbology Decoding.
Reverse Speech
One popular tool is Reverse Speech, created by David John Oates, an Australian researcher, philosopher, and author. Reverse Speech allows us to journey into the depths of our own soul using qualitative research methodologies.
Reverse speech is a hidden language embedded backwards in our speech. Reverse Speech is based upon the principle of complementarity, which simply means that the topic of the forward speech is the same as the topic of the reverse. It acts as a kind of editor, correcting the conscious dialogue when needed.
Reverse speech is a non-invasive method and an ultimate truth detector, a tool used to get in touch with our inner spirit. It accesses the subconscious, speaking a metaphor-driven language that flows from our emotional connection.
One of the most exciting aspects of Reverse Speech is its ability to reveal the voice of the human spirit. Reverse Speech reveals a constant dialogue between the spirit and the conscious mind. It gives us clues to emotions stored in the past, to the state of the human soul, and to our relationship with the Godhead.
Essentially, what David discovered was that our spoken language is actually bi-level. A second form of communication consistently occurs embedded backwards within our forward speech, outside of our conscious control or knowledge.
Language has a primary importance in Jungian psychology and its practice. C. G. Jung saw every act of speech as a psychic event. Even the “worker” words in language, like prepositions or conjunctions, carry particular archetypal energies, working dynamically in the conduct of transformational narratives, both for personal and collective purposes.
In fact, it appears that the sounds of our language probably even evolved in such a way that our messages could be communicated both forwards and backwards simultaneously. With Reverse Speech we are able to show scientifically that language is a bi-level, forwards-and-backwards communication system. When human speech is recorded and played backwards at a certain speed, with certain frequencies highlighted, you will hear the spirit speak in a clear, audible voice that the conscious mind can hear and understand.
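The playback step described above, recording speech and reversing it, can be sketched with only the Python standard library. This is a minimal illustration, not part of any Reverse Speech software: the function name and file paths are our own assumptions, and adjusting playback speed or highlighting frequencies would require an audio library beyond this sketch.

```python
# Sketch: reverse a WAV recording so it can be played back for
# Reverse Speech listening. Standard library only; `reverse_wav`
# and the file paths are placeholder names (assumptions).
import wave

def reverse_wav(in_path: str, out_path: str) -> None:
    """Read a WAV file and write a copy with its frames in reverse order."""
    with wave.open(in_path, "rb") as src:
        params = src.getparams()
        frames = src.readframes(src.getnframes())

    width = params.sampwidth * params.nchannels  # bytes per audio frame
    # Split the byte stream into whole frames, then reverse their order
    # so each sample stays intact while the recording plays backwards.
    reversed_frames = b"".join(
        frames[i:i + width] for i in range(len(frames) - width, -1, -width)
    )

    with wave.open(out_path, "wb") as dst:
        dst.setparams(params)
        dst.writeframes(reversed_frames)
```

Running `reverse_wav("speech.wav", "speech_reversed.wav")` would produce a file you can play in any audio player to listen for reversals.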
Reverse Speech research has discovered that children speak backwards before they speak forwards. From as early as 4 months of age they pronounce simple words in reverse, and by 13-14 months of age complex sentences are forming in reverse. Reverse speech is a phenomenal tool for children with speech delays, ADD, ADHD, and dyslexia: it helps uncover the trauma and its cause, and allows the past trauma to be corrected.
It is an imperative tool for psychoanalysis and for reprogramming the subconscious through hypnosis, uncovering the latent powers residing within your soul, your mission, and your vision.
Symbols
Symbols are representations of ideas in text or geometry (e.g., marks, signs). Symbols represent ideas, cultures, and more. One example is the Tetragrammaton. Decoding symbols allows us to uncover the patterns of the mind and to understand ourselves and the world that surrounds us.
An action can also serve as a symbol, representing an idea, as when we nod or salute 🙏.
Here are two examples:
- Left: feminine, creativity, moon
- Right: masculine, action, sun
Toolkit
ParchesU is implementing a Scholar App that contains a toolkit to decode the subconscious mind for healing purposes. For example, we are going to closely monitor students, which will give us lots of data to work with. We will also see the work the students produce: paintings, songs they sing, the color of the sky, and so on. This will allow us to build AI systems that not only decode the soul's speech, but also identify what drives different reactions and which healing modalities are most successful.
Here is an episode that provides more details about this amazing technology and our ideas for apps we can build using AI in a highly monitored environment. The app will play a key role in the ascension path of the scholars.
Available Speech Recognition Frameworks
The tool, which integrates seamlessly with the Scholar App, requires a speech recognition framework to process speech, reverse it, and analyze it.
1. Cloud-Based Speech Recognition APIs: Power and Convenience
For high accuracy, scalability, and ease of integration without the need to manage complex infrastructure or train models from scratch, cloud-based APIs are an excellent option. These services are offered by major tech giants and specialized providers:
- Google Cloud Speech-to-Text: Renowned for its accuracy and extensive language support (over 120 languages and dialects). It offers features like real-time transcription, speaker diarization, and model adaptation.
- Amazon Transcribe: A strong contender, especially for those already within the AWS ecosystem. It provides features like automatic language identification, custom vocabularies, and speaker labeling.
- Microsoft Azure Speech to Text: Integrates well with other Azure services and offers robust features, including real-time transcription, customization for specific domains (like medical or call centers), and support for numerous languages.
- Rev AI: Known for its high accuracy, particularly in challenging audio environments. It offers both automated and human-powered transcription services and APIs.
- AssemblyAI: Focuses on providing a developer-friendly API with features like summarization, content moderation, and topic detection in addition to core transcription.
- Deepgram: Emphasizes speed and real-time transcription capabilities, making it suitable for applications requiring low latency.
Pros: High accuracy, scalability, managed infrastructure, broad language support. Cons: Ongoing costs (pay-per-use), less control over models, potential data privacy concerns for some applications.
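Because each cloud provider ships its own SDK, one way to keep the Scholar App portable is to hide them all behind a single small interface. The sketch below is our own assumption about how that wrapper could look; the class and method names are not any provider's real SDK, and the fake backend exists only so the design can be exercised without network calls.

```python
# Sketch: a provider-agnostic transcription interface. Real backends
# (Google, Amazon, Azure, etc.) would each implement Transcriber using
# their own SDK; the names here are illustrative assumptions.
from abc import ABC, abstractmethod

class Transcriber(ABC):
    @abstractmethod
    def transcribe(self, audio: bytes, language: str = "en-US") -> str:
        """Return the text recognized in the given audio payload."""

class FakeTranscriber(Transcriber):
    """Stand-in backend for local testing; makes no network calls."""

    def __init__(self, canned: str) -> None:
        self.canned = canned

    def transcribe(self, audio: bytes, language: str = "en-US") -> str:
        return self.canned
```

Swapping providers then means adding one subclass, without touching the rest of the app.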
2. Open-Source Speech Recognition Toolkits: Customization and Control
If we want to control and customize our own tools, including training on our own data, and need offline capabilities, open-source toolkits are the best way to go. This will also reduce the costs of cloud solutions. Here is a list of open-source frameworks:
- Kaldi: A powerful and highly flexible toolkit widely used in the research community. It offers extensive features but has a steep learning curve and requires significant expertise.
- Whisper (by OpenAI): Gaining immense popularity due to its impressive accuracy and multilingual capabilities, trained on a massive dataset. It can be run locally and offers various model sizes to balance performance and resource usage.
- Mozilla DeepSpeech (Development Winding Down): While influential, its development by Mozilla has ceased. It’s based on TensorFlow and offers real-time capabilities but may lack ongoing support.
- Vosk: A lightweight and offline-first toolkit, suitable for mobile and embedded applications. It supports multiple languages and platforms.
- SpeechBrain: A PyTorch-based toolkit designed for ease of use and flexibility in building various speech processing applications, including ASR. It integrates well with Hugging Face.
- Wav2Letter (by Facebook AI Research): Known for its C++ implementation and focus on end-to-end ASR systems.
Pros: Free to use, full control over models and data, offline capabilities, strong community support for some projects. Cons: Requires significant technical expertise, can be resource-intensive to train and deploy, may require more effort to achieve high accuracy.
3. Python Libraries and Deep Learning Frameworks: Building Blocks
For developers comfortable working at a lower level or integrating speech recognition into Python applications, these libraries and frameworks are essential:
- Hugging Face Transformers: A must-have for the modern AI developer. It provides easy access to a vast collection of pre-trained models, including many state-of-the-art ASR models like Whisper, making it easy to experiment and deploy.
- TensorFlow and PyTorch: The foundational deep learning frameworks. Most ASR toolkits are built on top of these. Choosing between them often comes down to developer preference and ecosystem integration. PyTorch is often favored in research for its flexibility, while TensorFlow has strong production deployment tools.
- SpeechRecognition (Python Library): A convenient wrapper library that provides a simple interface to several popular speech recognition engines and APIs (including the Google Web Speech API, CMU Sphinx, etc.). It’s great for quick prototyping and simpler applications.
- NVIDIA NeMo: A toolkit specifically designed for building conversational AI applications, including ASR. It leverages PyTorch and is optimized for NVIDIA GPUs, offering high performance.
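Once a framework family is chosen, the pros and cons listed in the three sections above boil down to a few constraints. The helper below encodes that decision as a sketch; the rules and the returned labels are our own illustrative reading of the trade-offs, not a prescription.

```python
# Sketch: pick a framework family from the constraints discussed above.
# The decision rules mirror the pros/cons in the text and are illustrative
# assumptions, not an official recommendation.
def choose_framework(offline_required: bool,
                     budget_limited: bool,
                     ml_expertise: bool) -> str:
    if offline_required or budget_limited:
        # Open-source toolkits run locally and avoid per-use cloud fees,
        # but training your own models needs in-house expertise.
        if ml_expertise:
            return "open-source toolkit (e.g. Kaldi or SpeechBrain)"
        # Without that expertise, a pre-trained local model still works.
        return "pre-trained local model (e.g. Whisper or Vosk)"
    # With no offline or budget constraint, managed cloud APIs offer
    # the least operational overhead and broad language support.
    return "cloud API (e.g. Google Cloud Speech-to-Text)"
```

For instance, an offline Scholar App deployment without an ML team would land on a pre-trained local model, while an online deployment with flexible budget could start with a cloud API.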