Weekly I/O #55

Visual before Context, Lossy LLM, Motivation-hygiene Theory, Desirable Difficulty, No More Time

Cheng-Wei Hu | 胡程維
4 min read · Mar 5, 2023

Weekly I/O is a project where I share my learning Input/Output. Every Sunday, I write an email newsletter with five things I discovered and learned that week.

Sign up here and let me share a curated list of learning Input/Output with you 🎉

The following is extracted from the email sent on Feb 26, 2023

1. Johnny Harris’s storytelling formula: Visual anchor before contextual bridge. Experience it and then understand it.

YouTube: Why every Johnny Harris video goes viral

What makes Johnny Harris’s storytelling compelling? He uses the formula “visual anchor before contextual bridge”.

Unlike traditional journalistic storytelling like TV news, where the narrative establishes the context first and then shows footage as evidence, Johnny Harris does the opposite.

When telling a story, he first thinks about the best visual anchor to make the audience curious. He puts visuals upfront and provides context later. He often says, “Look at this thing!” when offering visual anchors to let the audience experience it. Only after that does he give contextual bridges to let the audience understand it.

He creates continuity by repeating these visual anchors throughout the whole video, adding only the necessary contextual bridges. In his storytelling, context makes up the minority of the video and always comes after the visual anchors.

2. ChatGPT is like a lossy compression of all the text on the Web. If it takes material generated by itself as training data for the next model, the output will get worse. Can an LLM’s output be as good as its own input?

Article: ChatGPT Is a Blurry JPEG of the Web

This might not be factually accurate, but I found the analogy interesting. In Ted Chiang’s words:

“Think of ChatGPT as a blurry jpeg of all the text on the Web. It retains much of the information on the Web, in the same way that a jpeg retains much of the information of a higher-resolution image, but, if you’re looking for an exact sequence of bits, you won’t find it; all you will ever get is an approximation. But, because the approximation is presented in the form of grammatical text, which ChatGPT excels at creating, it’s usually acceptable. You’re still looking at a blurry jpeg, but the blurriness occurs in a way that doesn’t make the picture as a whole look less sharp.”

“But I’m going to make a prediction: when assembling the vast amount of text used to train GPT-4, the people at OpenAI will have made every effort to exclude material generated by ChatGPT or any other large-language model. If this turns out to be the case, it will serve as unintentional confirmation that the analogy between large-language models and lossy compression is useful. Repeatedly resaving a jpeg creates more compression artifacts, because more information is lost every time. It’s the digital equivalent of repeatedly making photocopies of photocopies in the old days. The image quality only gets worse.”

“Indeed, a useful criterion for gauging a large-language model’s quality might be the willingness of a company to use the text that it generates as training material for a new model. If the output of ChatGPT isn’t good enough for GPT-4, we might take that as an indicator that it’s not good enough for us, either. Conversely, if a model starts generating text so good that it can be used to train new models, then that should give us confidence in the quality of that text. (I suspect that such an outcome would require a major breakthrough in the techniques used to build these models.) If and when we start seeing models producing output that’s as good as their input, then the analogy of lossy compression will no longer be applicable.”
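Chiang’s resaving analogy is easy to try for yourself. Here is a minimal sketch (my own illustration, not from the article) that re-encodes the same image as a JPEG generation after generation and prints how far the pixels drift from the original. It assumes Python with the Pillow library installed; the XOR-pattern test image and the quality setting are arbitrary choices for illustration.

```python
# A quick sketch of "photocopies of photocopies": re-encoding a JPEG
# repeatedly and measuring how the pixels drift from the original.
# Assumes Pillow is installed (pip install Pillow).
from io import BytesIO
from PIL import Image

# Build a self-contained 256x256 test image with high-frequency detail,
# which JPEG compresses imperfectly.
img = Image.new("RGB", (256, 256))
img.putdata([((x ^ y) % 256, x, y) for y in range(256) for x in range(256)])
original = list(img.getdata())

for generation in range(1, 31):
    buf = BytesIO()
    img.save(buf, format="JPEG", quality=60)  # lossy re-encode
    buf.seek(0)
    img = Image.open(buf).convert("RGB")
    if generation % 10 == 0:
        pixels = list(img.getdata())
        diff = sum(
            abs(a - b)
            for p, q in zip(original, pixels)
            for a, b in zip(p, q)
        )
        print(f"generation {generation}: mean pixel error {diff / (256 * 256 * 3):.2f}")
```

Each re-encode can only discard information, never recover it, which is exactly the worry about training one model on another model’s output.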

3. Herzberg’s motivation-hygiene theory: Job satisfaction and dissatisfaction are independent. Improving factors that cause dissatisfaction does not necessarily lead to satisfaction, and vice versa.

Article: Herzberg’s Motivation Two-Factor Theory

Herzberg’s motivation-hygiene theory (aka two-factor theory) posits that job satisfaction and dissatisfaction are independent, each driven by a separate set of factors (motivation factors and hygiene factors). Improving motivation factors like personal growth, performance, and positive societal impact increases satisfaction. On the other hand, enhancing hygiene factors like salary, working conditions, and policies decreases dissatisfaction.

Therefore, high pay in a meaningless job (golden handcuffs) can bring neither satisfaction nor dissatisfaction. To some degree, it brings emptiness. Conversely, a meaningful job with long working hours can bring both satisfaction and dissatisfaction, which can be overwhelming.

4. Desirable Difficulty in learning: Harder retrieval leads to better learning, given retrieval is successful. Test yourself before you think you are ready. Recall without hints.

Book: Ultralearning

Many studies have shown that retrieval practice, such as self-Q&A and flashcards, is more effective for learning than review practice. However, we still tend to prefer review because we can’t accurately gauge our own understanding. While review feels smoother and easier, retrieval practice is challenging and exhausting. Review, being the easier of the two, therefore makes us feel like we understand the material better than we actually do.

Psychologist R. A. Bjork adds the concept of “desirable difficulty” to the retrieval effect: “More difficult retrieval leads to better learning, provided the act of retrieval is itself successful.” Retrieval tests without hints are better for retention than ones with hints, and tests that require recall are better than recognition tests (tests where you recognize the answer rather than come up with one).

The difficulty isn’t an obstacle to making retrieval work. It’s the reason it works. Therefore, testing ourselves before we think we’re ready can be more efficient for learning. Aim for the harder tests.

5. “One day you will wake up and there won’t be any more time to do the things you’ve always wanted. Do it now.” — Paulo Coelho

Quote

That’s it. Thanks for reading, and I hope you enjoyed it. If you would like to receive this content every Sunday, sign up below 😎


Cheng-Wei Hu | 胡程維

Subscribe at chengweihu.com for new articles and the newsletter!