"Anyone can learn anything they want..." and how technology can help
In the study, I wonder if the learning curve plateaus (i.e., people stop improving). This is extremely common in chess Elo ratings: after a certain level, players seem unable to improve any further, no matter how much they practice. It's as if there is a brain limitation, i.e., while everyone can improve to some degree, not everyone can reach the same level of mastery.
I like your post -- until the end. This sounds like statistical pseudo-science: "At amateur levels in sports, for example, practice accounts for 10-30% of the difference in performance, quite a large proportion, but at the most elite level, it is only 1%."
False precision, over-generalization.
Thank you for your insights. I tried to get ChatGPT to train me to be assertive yesterday and while I had to remind it of some things (e.g., don't give me examples, let me respond to feedback first) it worked well. This is amazing stuff. The possibilities are endless!
doesn't work anymore - this is what I get
As an AI language model, I am not able to engage in interactive role-play scenarios, but I can provide you with guidance on how to improve your negotiation skills. Deliberate practice is a great way to improve your negotiation skills, and there are several steps you can take to make the most of your practice sessions:
Deliberate learning is very good, indeed. But what makes it complicated and hard is that we have become accustomed to ease. Nowadays we get everything easily, from food to dating, so making an effort seems very challenging to us.
What do you think about applying this to AI art and voice acting? It seems a lot of people are extremely angry about those. Are they right, in your opinion?
There is a complementary post Ozan Varol wrote a while back: https://ozanvarol.com/the-problem-with-deliberate-practice/
What's the source for the footnote comment re: elite performance?
great post! important issues here about the nature and benefits of practice
regarding the initial findings about surprisingly consistent learning slopes, I'd be very surprised if some form of regularization (constraint on slope variance) wasn't happening there
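To make the regularization intuition concrete: here is a minimal sketch of partial pooling (empirical-Bayes shrinkage) on per-learner slopes, which is one common way a constraint on slope variance arises in hierarchical analyses. All names, the simulated data, and the assumed noise variance are illustrative assumptions, not anything from the study itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate 50 learners, each with a noisy linear learning curve.
n_learners, n_trials = 50, 20
true_slopes = rng.normal(1.0, 0.3, n_learners)       # between-learner variation
t = np.arange(n_trials)

raw_slopes = np.empty(n_learners)
for i in range(n_learners):
    y = true_slopes[i] * t + rng.normal(0, 2.0, n_trials)  # noisy observations
    raw_slopes[i] = np.polyfit(t, y, 1)[0]           # per-learner OLS slope

# Partial pooling: pull each slope toward the grand mean, weighted by how
# much of the observed spread is assumed to be estimation noise.
grand_mean = raw_slopes.mean()
within_var = 0.05                                    # assumed estimation noise
between_var = max(raw_slopes.var() - within_var, 1e-6)
shrink = between_var / (between_var + within_var)    # shrinkage factor in (0, 1)
pooled_slopes = grand_mean + shrink * (raw_slopes - grand_mean)

# Shrinkage compresses slope variance, making slopes look more uniform
# than the raw per-learner estimates.
print(pooled_slopes.var() < raw_slopes.var())
```

If something like this is baked into the analysis, "surprisingly consistent slopes" could partly reflect the shrinkage rather than the learners.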