
a soft fork has to be implemented, I'm not sure if that fact is even being debated at this point

I'd challenge that. The only reason to have to do a softfork is vulns. Instead, follow the money and you'll see why OP_CAT got so much attention when it did. When it failed to get momentum, you'll see that the same money source shifted to something different.

You see, the real issue that either OP_CAT or OP_CTV solves is that someone's bridge that works on Ethereum does not work very well on Bitcoin without a softfork for some enhanced script primitives. I'm quite sure they would love an OP_CHECKZEROKNOWLEDGEPROOFVERIFY too. And that will solve the problems you list too, according to them.

I've extracted the following "problems" from your post:

  • the L2 scaling issue
  • the security budget cliff
  • the quantum threat

Do you have for each of these:

  1. The definition that you're using of these problems so that they can be assessed, and
  2. The solution you're alluding to, using OP_CAT

Because without that, it is impossible to even start validating if what you claim is true. So what problem definitions, and which explanation that OP_CAT solves all this, inspired you to post this?

The L2 scaling issue: only perhaps 10% of the global population can be onboarded onto L2s right now. There's not enough throughput on L1 to handle L2 channels at scale.

The security budget cliff. You can do the math: if you look at the halving schedule and the number of transactions per second that L1 can handle, there's a maximum amount of capital you'll be able to extract from transactions, and it is much less than what's available today to pay the miners and secure the network. The only way to have enough transaction fees in the future is to have covenants or something similar. That way you can have massive LN channels that carry the economic weight of thousands of transactions. That would make high L1 fees meaningless.
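The halving math here can be sketched as a back-of-envelope calculation. The throughput and average-fee numbers below are illustrative assumptions, not consensus constants; only the subsidy halving schedule is real:

```python
# Back-of-envelope sketch of the "security budget cliff" argument.
# Assumptions: ~7 tx/s of L1 throughput and a 5,000-sat average fee.

def subsidy_btc(height: int) -> float:
    """Block subsidy in BTC at a given height (halves every 210,000 blocks)."""
    halvings = height // 210_000
    if halvings >= 64:
        return 0.0
    return 50.0 / (2 ** halvings)

def max_daily_fee_btc(tx_per_second: float, avg_fee_sats: float) -> float:
    """Ceiling on daily fee revenue given L1 throughput and an assumed fee."""
    tx_per_day = tx_per_second * 86_400
    return tx_per_day * avg_fee_sats / 100_000_000  # sats -> BTC

blocks_per_day = 144
for halving in range(4, 9):
    height = halving * 210_000
    daily_subsidy = subsidy_btc(height) * blocks_per_day
    daily_fees = max_daily_fee_btc(7, 5_000)
    print(f"halving {halving}: subsidy {daily_subsidy:.2f} BTC/day, "
          f"fee ceiling {daily_fees:.2f} BTC/day")
```

Under these assumptions the fee ceiling (~30 BTC/day) stays flat while the subsidy keeps halving, which is the cliff being described.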

The quantum threat, should be obvious to anyone who has been paying attention.

Everything I've said is verifiable, there's no legitimate challenge that a soft fork is going to be eventually needed IMO.

reply

I'm sorry, was it rude of me to challenge a premise? I'm sorry I wrote anything...

Let me go through it regardless, because you were complaining you're not getting answers:

1. L2 Scaling

First off, I think what you mean is L1 scaling, not L2 scaling, per your explanation:

There's not enough [throughput] on L1 to handle L2 channels at scale.

How does OP_CAT fix this? Is it needed for Channel Factories (multi-party LN channels)? From what I understand, there are protocols proposed that don't need any covenant, though if you were, for example, to use Ark (the protocol) with CTV, you could in theory get a more efficient factory.

Maybe CAT + Schnorr tricks can help with this? But I don't remember seeing a single definitive work on it (do you have one?). And even then, is that a real long-term solution, and is it actually resistant to Shor's (your third topic, PQ)? I'm not convinced of either.

Another question to ask yourself is: if utreexo becomes widely used, is there a reason to keep the L1 throughput low? In fact, I don't see how 10% of the world can non-custodially interact with Bitcoin without utreexo: L1 sync (of your txs and their recent ancestry) is always a must-have. You cannot operate without it. What you can do without is the archive of every L1 tx ever made.

2. Security budget cliff

the amount of transactions per second that L1 can handle

Fee pressure rises as people get priced out of L1. Isn't it therefore logical that you don't want to solve the previous item too well? If the capacity equals the world population, then everyone will get their tx in at base fee, and fees would only rise during events where either everyone wants to close their channels at once, or there is some new kind of spam, and then we get another 5 years of Luke & co drama. Hardly a good income source for miners.

As you note:

there's a maximum amount of capital you'll be able to extract from transactions

The problem is demand for value, so my question about this topic is different: which demand for value will OP_CAT unlock? I mean: if it gets activated, who will start making transactions that didn't before? The best test for it is: what will you do with Bitcoin that you aren't doing now? If you had OP_CAT today, what would you be using Bitcoin for?

That way you can have massive LN channels that carry the economic weight of thousands of transactions.

But why OP_CAT? Why not Ark+CTV? What makes you so sure that OP_CAT fixes this?

3. The quantum threat

should be obvious to anyone who has been paying attention.

It's not obvious. What is obvious is that Shor's algo is a potential threat to anything that depends on the Discrete Logarithm Problem not being broken. This has been obvious since 1994. So I agree, it would be prudent to find a solution that doesn't depend on DLP.

You are 100% correct that OP_CAT doesn't depend on the DLP. It is literally an instruction to do byte concatenation: 0x13 0x37 OP_CAT => 0x1337. It also doesn't solve "the quantum threat", but it could enable some rather complicated scripts that do 32-bit arithmetic. These scripts will always introduce byte overhead, which goes immediately against your issues #1 and #2: overhead means lower tx throughput, and it means more fees are extracted from the same value of sats, making transacting less economical.
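To make the concatenation concrete, here's a toy stack-machine sketch of OP_CAT's semantics. This is not a real Script interpreter; the 520-byte cap reflects the stack-element limit used in the BIP-347 proposal:

```python
# Toy sketch of OP_CAT: pop the top two stack elements, push their
# concatenation. BIP-347 rejects results over 520 bytes (the standard
# stack element size limit).

def op_cat(stack: list[bytes]) -> None:
    b = stack.pop()
    a = stack.pop()
    if len(a) + len(b) > 520:
        raise ValueError("resulting element exceeds 520-byte stack limit")
    stack.append(a + b)

stack = [bytes([0x13]), bytes([0x37])]
op_cat(stack)
print(stack[-1].hex())  # -> "1337"
```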


Yes, I do understand the zk-rollup idea. But then how much is going to the sequencers and how much is really coming back as L1 fees? Are the ZK sequencers going to pay a 2-coin fee for every mainnet tx they make, even if they can get away with maybe 10-50k sats and still be sitting on top of the mempool under current fee pressure levels? What does that mean for the fees paid on their rollup? If you overprice, you will lose demand.

reply

I appreciate your detailed breakdown and challenging the premise, it's exactly why I posted it in the first place.

You definitely make a fair point about CTV vs CAT, and I definitely haven't formed a firm opinion on either yet. My main argument for CAT is that it's more general purpose. I'm open to the idea that maybe that's a bad thing though.

I also 100% agree that you don't want to solve the fee problem too well. If you have infinite block space, there is no fee market, and no amount of throughput will fix that. I got into a disagreement with a shitcoiner that thinks their protocol "fixes bitcoin" by having nearly infinite block space, and to me, that just sounds like a ticking time bomb.

I also agree that using CAT for quantum resistant scripts introduces massive byte overhead, I'm currently not sure of any solution to quantum that wouldn't do that though.

Sorry if I sounded standoffish in my original reply, that wasn't my intention. I actually really appreciate your effort.

reply

@k00b thanks for giving your opinion on SN live. I was bummed that I didn't see you in here and wanted your opinion.

reply
1073 sats \ 2 replies \ @k00b 18 Apr

My bad. I spend most of my time on SN doing customer service. I need an alt so I can let it rip.

reply

Hmm, I meant to respond to the main body of the post, not randomly in the middle of the thread. Either way, I'm sure optimism doesn't mind being tagged in.

I honestly do like hearing the opinion that a soft fork isn't even necessary from someone I consider to be extremely knowledgeable on the subject. I'll keep diving into different solutions, but are there any BIPs that you currently like, or do you just think these issues are overstated/solvable with the current state of things?

I definitely like your stance that a fork will happen if it NEEDs to happen, and that preempting possible issues isn't completely necessary.

reply

Is fine, plus you got zaps on comments that otherwise would have been unlikely for me to have seen.

reply

No worries.

I was and still am interested in where you got the impression that OP_CAT fixes everything, and I still don't know, haha. Yes, it's a powerful primitive to have, also if you look at the greater set that is being proposed for restoration in BIP-441, but it has a lot of tradeoffs if you're actually going to use it to permanently solve problems. I don't think it will concurrently solve all 3 of the problems you mentioned; maybe it could help solve each individually, in a quirky and expensive way, at the cost of the other 2.

That's what bothers me about most of the discourse that says "it's either x or y and it has to happen"; it's a lot of repetition of narratives that magically lost all sense of tradeoffs and caveats. But despite all the remarkable conspiracy theories, I'm rather confident that if there were a solution to all 3 of your criteria without any tradeoffs at all, it would have been PR'd and merged by now.

I'll ask my last question: if you had to choose between your 3 problems and you could only solve one, which one would you pick, and why?

reply

So this is just my understanding.

It allows for hash-based signatures, which would mitigate the quantum threat. It would introduce covenants, which solve the scalability issue, and my theory is that by solving the scaling issue, the security budget gets solved at the same time with adoption.

I know there's a large data weight involved, but it's my thought that CAT will allow that extra weight to be absorbed.
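For a sense of that data weight: the simplest hash-based scheme that CAT-style script tricks are usually built around is a one-time Lamport signature, sketched below as a toy construction (illustrative only, not a deployable scheme). Signing a 256-bit digest takes 256 × 32 = 8,192 bytes of signature data, versus 64 bytes for a Schnorr signature:

```python
# Toy one-time Lamport signature: hash-based, so no DLP assumption,
# but the signature alone is ~8 KB of witness data.

import hashlib
import secrets

def lamport_keygen():
    # Two 32-byte secret preimages (for bit 0 / bit 1) per message bit.
    sk = [[secrets.token_bytes(32) for _ in range(2)] for _ in range(256)]
    pk = [[hashlib.sha256(s).digest() for s in pair] for pair in sk]
    return sk, pk

def _bits(digest: bytes):
    return [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

def lamport_sign(sk, digest: bytes) -> list[bytes]:
    # Reveal the preimage matching each bit of the digest.
    return [sk[i][b] for i, b in enumerate(_bits(digest))]

def lamport_verify(pk, digest: bytes, sig: list[bytes]) -> bool:
    return all(hashlib.sha256(sig[i]).digest() == pk[i][b]
               for i, b in enumerate(_bits(digest)))

sk, pk = lamport_keygen()
msg = hashlib.sha256(b"tx digest").digest()
sig = lamport_sign(sk, msg)
print(lamport_verify(pk, msg, sig))  # True
print(sum(len(s) for s in sig))      # 8192 bytes of signature data
```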

I'm definitely not calling CAT a silver bullet, and I understand that there's other friction involved, but, that's the fun of having these conversations.

As far as which of the three, if I had to pick one, it would be the quantum threat. You could reduce the hashrate of the entire network by 50% and still have practically zero threat of a 51% attack. I think there's a fairly large runway, and I also think it's actually possible that some level of homeostasis occurs and the network just finds a natural balance that maintains a high level of security.

reply

Right, so your assertions are:

  1. We can introduce lattices or something similar with OP_CAT while at the same time using it with Schnorr-tricks to get covenants. I don't think this would be very efficient, but it may be possible to hack a thing here and there.
  2. Covenants somehow will create demand. My question to you about that is still open above: what will you do with Bitcoin that you currently do not when you have it? Or will covenants allow you to orange pill easier? How? This is the one question no one seems to be able to really answer.
  3. CAT will allow that extra weight to be absorbed. What do you mean? There's 4MB. If everyone fills their tx with 100x the data to do a lattice-based sig, then you have 100x less space. I don't see how OP_CAT will magically turn 4MB into 400MB. Must be my fever making me dumb haha.
  4. The security budget gets solved at the same time with adoption. At the cost of pricing out 99 of 100 bitcoiners. Because that's what "security budget" is if you take away subsidy: the price you and I pay for a L1 transaction.
reply

Ahh, I think maybe we crossed some wires. I don't necessarily think covenants will create demand. I think we will need them to meet the eventual demand. As it stands right now, I don't know that we would need to soft fork this exact instant.

As far as absorbing the extra weight goes, a ZK rollup can take 100,000 data-heavy transactions and process them off chain. It then settles a much smaller mathematical proof of that batch on L1. So it takes 400MB of transactional data and compresses the settlement proof into a few kilobytes. So, to your first point, that's why unlocking layer 2 rollups is important: we wouldn't need to do all of the work on layer 1, so the efficiency isn't as big of a deal.

As far as pricing out 99 of 100 bitcoiners goes, you and I wouldn't be paying the massive fees. The rollup sequencer pays the fee, and because it's batching 100,000 individual transactions, our cost on the L2 is .005c and the miners still get the big payout.
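The amortization claim is simple division; here's a minimal sketch with assumed numbers (a 2,000,000-sat L1 settlement fee split across a 100,000-tx batch; both figures are illustrative):

```python
# Sketch of the fee-amortization claim: one L1 settlement fee split
# across a large rollup batch. All numbers are illustrative assumptions.

def fee_per_user_sats(l1_fee_sats: int, batch_size: int) -> float:
    """Each rollup user's share of the sequencer's single L1 fee."""
    return l1_fee_sats / batch_size

share = fee_per_user_sats(2_000_000, 100_000)
print(f"{share:.0f} sats per user")  # -> 20 sats per user
```

Note this only shows the user-side cost; whether the sequencer actually pays a large L1 fee (rather than the minimum it can get away with) is exactly the question raised above.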

reply

Right, we get to the crux of the answer to your 3 problems:

  1. Scaling issues get solved through compression. Correct.
  2. Security budget cliff does not get solved unless <magic>. Compression reduces fee pressure, no matter how you put it.
  3. Quantum only gets solved for those on the rollup. To protect your coin, you MUST switch to another network. (And then it is no longer your coin.)

PS: In the case of OP_CAT, you must mean Starkware, and I thought I saw their hand (or their affiliates') in some of the things you write. Did you check how their network is going? Their lil token?

So what this means is:

  • We get "infinite"-ish transaction space, but the cost of providing this space is sitting at the sequencers, not at the Bitcoin miners.
  • Because of this, the Bitcoin miners will not be helped much, and most likely even be hurt. Money flows will mutate from a fee payment to a miner, to a fee payment to a sequencer.
  • At low subsidy, miners will be incentivized to form a price cartel and to centralize further.

What isn't solved? Bitcoin's problems.

199 sats \ 2 replies \ @ek 12 Apr
If you have infinite block space, there is no fee market

And it limits who can afford to run a node

reply

Have you looked at mandacaru (#1466811)?

reply
196 sats \ 0 replies \ @ek 12 Apr

No, but I looked at utreexo.

My understanding is that with utreexo, instead of storing the utxo set in GBs (~/.bitcoin/chainstate), you can store it in a few MB with a merkle tree. You verify transactions with inclusion proofs for the outputs they spend. If you want to be a node to help other nodes sync, you would still need to store the blocks (~/.bitcoin/blocks).
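The mechanism described can be sketched as a toy merkle accumulator (a single tree here; real utreexo uses a forest of perfect trees and handles additions/deletions):

```python
# Toy merkle accumulator: a node keeps only the 32-byte root instead of
# the full leaf set, and spends are verified with inclusion proofs.

import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def build_tree(leaves: list[bytes]) -> list[list[bytes]]:
    """All levels of the tree, leaves first (assumes power-of-two count)."""
    levels = [leaves]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([h(prev[i] + prev[i + 1]) for i in range(0, len(prev), 2)])
    return levels

def prove(levels, index: int) -> list[bytes]:
    """Sibling hashes from leaf to root: the inclusion proof."""
    proof = []
    for level in levels[:-1]:
        proof.append(level[index ^ 1])
        index //= 2
    return proof

def verify(root: bytes, leaf: bytes, index: int, proof: list[bytes]) -> bool:
    acc = leaf
    for sibling in proof:
        acc = h(sibling + acc) if index & 1 else h(acc + sibling)
        index //= 2
    return acc == root

utxos = [h(bytes([i])) for i in range(8)]  # stand-ins for UTXO hashes
levels = build_tree(utxos)
root = levels[-1][0]                       # the node stores only this
proof = prove(levels, 5)                   # 3 hashes for 8 leaves (log2 n)
print(verify(root, utxos[5], 5, proof))    # True
```

The proof is log2(n) hashes, which is why the node-side state shrinks from GBs to a few roots while spenders carry the proofs.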

Is that wrong?

reply