A Jungle-View Reflection on AI and Learning
- Corey'L Sams
- Jan 18

A little under two weeks ago I spent time in Belize with my senior cohort of the Forty Acres Scholars Program for our senior trip. While relaxing with the jungle and the pool in view, I looked at one of my friends and asked him:
"What do you think it means to optimally use AI?"
I consider him one of the smartest people I know, so I really value his opinion and was curious about his response. I was expecting some long, profound answer, but his simple answer surprised me.
He said: "Optimal AI use is when you use it in a way that does not trade off learning." His answer emphasizes the importance of protecting the learning process. It also gave me a different perspective: maybe optimal AI use is multi-dimensional, depending on your goal.
My friend told me that a big part of what excites him about his future role, and what mentors in that field have told him, is how much they continue to learn deep into their careers. That is a major reason he is pursuing it: he wants a job where he feels he is continuously learning.
Learning is an area that has definitely been impacted by AI, but AI has also created opportunities to enhance the learning process. I still remember a teacher during my sophomore year sharing his distaste for AI use. When I showed him how I used it to study and prepare well for a test, he ended up liking the approach so much that he presented my study system to the class as an example of stewardly AI use for learning.
On the other hand, it would be too simplistic to say deep learning is always the reason we use AI. Sometimes it's strategic productivity to get a task done; sometimes it's exploration or ideation. But I believe an important element of optimally using AI is being aware of what you value in the moment. If it's learning, protect that. If it's productivity, optimize while protecting that. If it's creativity, optimize while protecting that.
Maybe the key to optimally using AI is not a one-size-fits-all definition but rather a filtering question: "What am I using this tool for, what does it look like to use it in a way that protects that, and what does it look like to hinder that?"
What do you think it means to optimally use AI?
