On January 1, 2027, a Transformer-like model will continue to hold the state-of-the-art position in most benchmarks
84% chance

Tracking external bet: https://www.isattentionallyouneed.com/

This will resolve according to that site, whatever the conditions and caveats they have agreed on.


Is a Mamba-Transformer hybrid Transformer-like? Falcon H1 is an example of such an architecture.

@ProjectVictory does it use some kind of attention mechanism?
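As a rough illustration of what "hybrid" means here (purely schematic; the function names, dimensions, and layer pattern are illustrative assumptions, not Falcon H1's actual architecture): some layers mix tokens with attention, the rest with a state-space-style recurrence.

```python
import numpy as np

def attention_mix(x):
    """Token mixing via single-head scaled dot-product self-attention
    (projections omitted for brevity)."""
    T, d = x.shape
    scores = x @ x.T / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)
    w = np.exp(scores)
    w /= w.sum(axis=-1, keepdims=True)
    return w @ x

def ssm_mix(x, decay=0.9):
    """Token mixing via a simple linear recurrence, a stand-in for an
    SSM/Mamba-style layer: h_t = decay * h_{t-1} + x_t. No pairwise
    attention scores are ever computed."""
    h = np.zeros(x.shape[1])
    out = np.empty_like(x)
    for t, xt in enumerate(x):
        h = decay * h + xt
        out[t] = h
    return out

def hybrid_stack(x, pattern=("ssm", "ssm", "attn", "ssm", "ssm", "attn")):
    """A hybrid model interleaves the two mixers with residual connections.
    Whether such a stack still counts as 'Transformer-like' is exactly the
    question being asked above."""
    for kind in pattern:
        x = x + (attention_mix(x) if kind == "attn" else ssm_mix(x))
    return x
```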

What constitutes Transformer-like?

Is it all, or some, of these? (See the sketch after this list.)

  • Token inputs, token outputs

  • Positional encoding

  • (MH QKV SDPA -> MLP) layers

  • Residual connections
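A minimal sketch of the block structure those bullets describe, assuming the standard layout: multi-head QKV scaled dot-product attention followed by an MLP, each wrapped in a residual connection. Layer norm, positional encoding, and tokenization are left out, and all shapes and weight names are illustrative.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def transformer_block(x, Wq, Wk, Wv, Wo, W1, W2, n_heads):
    """One simplified Transformer block: MH QKV SDPA -> MLP,
    with residual connections around both sub-layers."""
    T, d = x.shape
    dh = d // n_heads
    # Project to queries, keys, values and split into heads: (heads, T, dh)
    q = (x @ Wq).reshape(T, n_heads, dh).transpose(1, 0, 2)
    k = (x @ Wk).reshape(T, n_heads, dh).transpose(1, 0, 2)
    v = (x @ Wv).reshape(T, n_heads, dh).transpose(1, 0, 2)
    # Scaled dot-product attention per head
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(dh)    # (heads, T, T)
    attn = softmax(scores) @ v                         # (heads, T, dh)
    attn = attn.transpose(1, 0, 2).reshape(T, d) @ Wo
    x = x + attn                                       # residual 1
    # Position-wise MLP with ReLU
    x = x + np.maximum(x @ W1, 0) @ W2                 # residual 2
    return x

# Tiny usage example with random weights
rng = np.random.default_rng(0)
T, d = 8, 16
Ws = [rng.normal(0, 0.02, (d, d)) for _ in range(4)]
W1, W2 = rng.normal(0, 0.02, (d, 4 * d)), rng.normal(0, 0.02, (4 * d, d))
y = transformer_block(rng.normal(size=(T, d)), *Ws, W1, W2, n_heads=4)
```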

@MalachiteEagle Transformers, yes. Attention, not really:

"…followed by two shared attention blocks."

https://blog.rwkv.com/p/eagle-7b-soaring-past-transformers

It even links the wager page in its bragging points:

All while being an “Attention-Free Transformer”
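For context on the "attention-free" claim: models like RWKV replace the quadratic T x T attention map with a recurrence over the sequence. A heavily simplified sketch of that idea follows; it is not RWKV's actual time-mixing formulation (which adds per-channel decays, a bonus term for the current token, and learned projections), just an illustration of mixing tokens with a running state instead of attention scores.

```python
import numpy as np

def attention_free_mix(k, v, decay=0.95):
    """Linear-recurrence token mixing in the spirit of attention-free models:
    each output is an exponentially decayed, key-weighted average of past
    values, computed with a running state rather than a T x T matrix."""
    num = np.zeros(v.shape[1])   # running sum of exp(k_i) * v_i
    den = np.zeros(v.shape[1])   # running sum of exp(k_i)
    out = np.empty_like(v)
    for t in range(v.shape[0]):
        w = np.exp(k[t])
        num = decay * num + w * v[t]
        den = decay * den + w
        out[t] = num / (den + 1e-9)
    return out
```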

I thought this wager was made today haha

Does this strictly resolve to the outcome of the bet?

@FranekZak Yep. Whatever details they come up with are in.
