Will there be a more sample-efficient pretraining algorithm than next token prediction for NLP before 2027?
2027 · 43% chance

Will a pretraining algorithm for language models which meaningfully improves on the sample efficiency of next token prediction be widely known before 2027?
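For context, the baseline being measured against is standard next-token prediction: minimize the cross-entropy of each token given its prefix, with test perplexity being the exponential of that loss on held-out data. A minimal PyTorch-style sketch, where `model` stands in for any causal language model returning per-position logits (illustrative only, not tied to any particular codebase):

```python
import torch
import torch.nn.functional as F

def next_token_loss(model, tokens):
    """Standard next-token prediction objective.

    tokens: LongTensor of shape (batch, seq_len) of token ids.
    The model predicts token t+1 from tokens 0..t, and the loss is the
    average cross-entropy over all predicted positions.
    """
    inputs = tokens[:, :-1]      # everything except the last token
    targets = tokens[:, 1:]      # everything except the first token
    logits = model(inputs)       # (batch, seq_len - 1, vocab_size)
    return F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),
        targets.reshape(-1),
    )

# Test perplexity (the metric named below) is exp(average held-out loss):
# perplexity = torch.exp(next_token_loss(model, heldout_tokens))
```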

Some details:

  • The technique must involve self-supervised learning on unlabeled data

  • The technique must have documented scaling behavior that meaningfully outperforms next token prediction in test perplexity as a function of training data, for whichever model architectures are popular by 2027

    • It's fine if there are tradeoffs with compute efficiency

    • It's fine if next token prediction outperforms the new technique early in training, or for small training runs, as long as scaling trends predict that the new technique would be better on runs using at least 10^26 FLOP and 15T tokens (roughly the budget of Llama 3 400B); a sketch of how such an extrapolation might be checked follows this list

  • It must be accepted within the ML community that the technique is broadly superior to next token prediction (even if there are some tradeoffs) and has the potential to scale to outperform the best prior models trained using next token prediction

  • To validate the scaling potential of the method, it must be used to train a model which qualitatively matches or exceeds GPT-4 (if the above conditions hold before 2027, I will wait until July 2027 for such a model and will resolve YES if one is produced)
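One way the "scaling trends predict" clause could be operationalized (a sketch under assumptions of my own, not an official resolution procedure): fit a saturating power law of test loss versus training tokens to each method's measured runs and compare the extrapolations at 15T tokens. The functional form L(D) = E + A / D^alpha and all numbers below are purely illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def loss_vs_data(D, E, A, alpha):
    # Chinchilla-style data term: irreducible loss E plus a power-law decay in tokens D.
    return E + A / D**alpha

# Hypothetical measured runs: losses (in nats) generated from assumed power
# laws purely for illustration; real runs would give noisy points.
tokens = np.array([1e9, 1e10, 1e11, 1e12])
loss_ntp = loss_vs_data(tokens, E=1.69, A=120.0, alpha=0.21)  # next-token prediction
loss_new = loss_vs_data(tokens, E=1.61, A=200.0, alpha=0.23)  # candidate technique

def extrapolate(tokens, losses, target=15e12):
    # Fit (E, A, alpha) to the measured points, then evaluate at the target scale.
    params, _ = curve_fit(loss_vs_data, tokens, losses,
                          p0=(1.5, 100.0, 0.2), maxfev=10000)
    return loss_vs_data(target, *params)

ntp_at_15T = extrapolate(tokens, loss_ntp)
new_at_15T = extrapolate(tokens, loss_new)
print(f"Predicted test perplexity at 15T tokens: "
      f"NTP {np.exp(ntp_at_15T):.2f} vs new method {np.exp(new_at_15T):.2f}")
```

With these illustrative numbers the candidate method starts out behind at 10^9 tokens but pulls ahead as data grows, which is exactly the pattern the criteria above allow.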

Comments

JohnCarpenter · 10mo

Define token. What about latent space tokens?

Market creator · 10mo

@JohnCarpenter Tokens are discretizations of data, generally text but possibly other modalities. In autoregressive language modeling, the goal is to produce probability distributions over token sequences. Currently, most tokenizers produce roughly 0.75 words per token (about 1.3 tokens per word), but this might change. I'm not referring to latent space tokens. When I mention "15T tokens" in the resolution criteria, this refers to an equivalent amount of unlabeled training data, even if tokenization methods change.
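To make the token/word ratio concrete, here is a small check using the tiktoken library and its cl100k_base encoding; the choice of tokenizer and text is illustrative, and the ratio varies with both.

```python
import tiktoken

# cl100k_base is the encoding used by GPT-4-era OpenAI models.
enc = tiktoken.get_encoding("cl100k_base")

text = ("Tokens are discretizations of data, generally text but "
        "possibly other modalities.")
tokens = enc.encode(text)
words = text.split()

print(f"{len(words)} words -> {len(tokens)} tokens "
      f"({len(tokens) / len(words):.2f} tokens per word, "
      f"{len(words) / len(tokens):.2f} words per token)")
```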

RemNi bought Ṁ30 YES · 10mo

What about latent diffusion language models?

@RemNi My understanding of the method is that an autoencoder is pretrained and diffusion is used to model the latent space. I’m not familiar with how the technique can be used for autoregressive language modeling, but if there’s a way that works and its scaling is borne out in the manner required by the question, then it would count.
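For readers unfamiliar with the setup being discussed, here is a very rough toy sketch of the two-stage objective the comment describes: pretrain an autoencoder over token sequences, then train a noise-prediction (diffusion) model on its latents. The architectures are placeholder linear layers purely to show the structure of the two losses; this is not a description of any specific published method, and it says nothing about how to make the approach autoregressive.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy dimensions; real systems would use transformer encoders/decoders.
VOCAB, SEQ_LEN, LATENT_DIM = 1000, 32, 64

# Stage 1: autoencoder that compresses a token sequence into a latent vector.
class SeqAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, 32)
        self.encoder = nn.Linear(SEQ_LEN * 32, LATENT_DIM)
        self.decoder = nn.Linear(LATENT_DIM, SEQ_LEN * VOCAB)

    def encode(self, tokens):            # (B, SEQ_LEN) -> (B, LATENT_DIM)
        return self.encoder(self.embed(tokens).flatten(1))

    def decode_logits(self, z):          # (B, LATENT_DIM) -> (B, SEQ_LEN, VOCAB)
        return self.decoder(z).view(-1, SEQ_LEN, VOCAB)

# Stage 2: a denoiser trained to predict the noise added to latents (DDPM-style).
denoiser = nn.Sequential(nn.Linear(LATENT_DIM + 1, 256), nn.ReLU(),
                         nn.Linear(256, LATENT_DIM))

def diffusion_loss(denoiser, z, t):
    """Noise-prediction objective on latents: z_t = sqrt(a)*z + sqrt(1-a)*noise."""
    noise = torch.randn_like(z)
    alpha = torch.cos(t * torch.pi / 2).unsqueeze(-1) ** 2  # simple noise schedule
    z_t = alpha.sqrt() * z + (1 - alpha).sqrt() * noise
    pred = denoiser(torch.cat([z_t, t.unsqueeze(-1)], dim=-1))
    return F.mse_loss(pred, noise)

ae = SeqAutoencoder()
tokens = torch.randint(0, VOCAB, (8, SEQ_LEN))  # stand-in for real text

# Stage 1 loss: reconstruct the tokens from the latent.
recon = F.cross_entropy(ae.decode_logits(ae.encode(tokens)).transpose(1, 2), tokens)

# Stage 2 loss: model the distribution of (frozen) latents with diffusion.
with torch.no_grad():
    z = ae.encode(tokens)
t = torch.rand(z.size(0))
diff = diffusion_loss(denoiser, z, t)
print(recon.item(), diff.item())
```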
