
0 sats \ 1 reply \ @Filiprogrammer 4 Dec \ on: P2MS Data Carry Part 2: UTXO set analysis - @deadmanoz bitcoin
Yes!
And should also probably include P2PK.
223 sats \ 3 replies \ @Filiprogrammer 2 Dec \ parent \ on: A soft fork is a 51% attack on Bitcoin bitcoin
One turns Bitcoin from decentralized into centralized.
The other is just the usual decentralized consensus formation.
271 sats \ 17 replies \ @Filiprogrammer 2 Dec \ parent \ on: A soft fork is a 51% attack on Bitcoin bitcoin
Let's imagine a fork happens and 40% of hash is on the new rules and 60% of hash remains on the old rules.
The chain with the old rules outgrows the one with the new rules (for now).
Let's say at a later point in time more miners switch from the old rules to the new rules. Now 60% of hash is on the new rules, while 40% is on the old rules. In this case the new chain will eventually outgrow the old one. Now the new chain is also the valid one according to the old rules, since it is longer. Thus the blocks on the old chain are replaced by the ones on the new fork.
A 51% attack would be a single entity reorging the chain because they have the majority of the hash rate.
A soft fork does NOT require a 51% attack. We had soft forks in the past. It just requires consensus among more than 50% of the hash rate.
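A toy model of that reorg dynamic, with made-up hash-rate shares and block counts:

```python
def simulate(phases):
    """Each phase: (total_blocks_mined, fraction_of_hash_on_new_rules)."""
    old_len = new_len = 0  # blocks built on each branch since the fork
    for blocks, new_share in phases:
        new_len += round(blocks * new_share)
        old_len += round(blocks * (1 - new_share))
    return old_len, new_len

# Phase 1: 40% of hash on the new rules; phase 2: miners switch to 60%.
old_len, new_len = simulate([(100, 0.40), (400, 0.60)])
print(old_len, new_len)  # 220 280

# Since new-rules blocks are also valid under the old rules, old-rules
# nodes reorg onto the new branch as soon as new_len > old_len.
```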
This refers to Samourai Whirlpool, which has been shut down.
But the coordinators listed on https://liquisabi.com/ use the WabiSabi protocol (the one used by Wasabi Wallet), which does not share the user's xpub.
Involves more trust?
Yes, if you are not connecting your wallet to your own node. If you don't run your own node, you don't validate consensus rules.
Read more about it here: Why I don’t celebrate Neutrino
Well, there is work on a Rust compiler on top of GCC:
https://github.com/Rust-GCC/gccrs
GCC Front-End For Rust: This is a full alternative implementation of the Rust language on top of GCC with the goal to become fully upstream with the GNU toolchain.
They don't monitor the entire network, they only count the peers that are currently connected to their nodes.
Notes: Data updated every 2 hours. Data based on actual reported peers to our Bitcoin nodes.
I did it with this Python library: https://github.com/lnbits/lnurl
```python
import lnurl

url = lnurl.decode('lnurl1dp68g...')  # returns the plain https URL
print(url)
```
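For the reverse direction, the same library also exposes an encode helper; a minimal sketch, assuming the lnbits/lnurl API (the URL below is just a placeholder, not a real LNURL-pay endpoint):

```python
import lnurl

# Encode a plain https URL into the bech32 LNURL format
print(lnurl.encode('https://example.com/lnurl-pay?session=abc123'))
```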
There is also https://knotsgoup.vercel.app/ which uses the Bitnodes API.
More likely one of these:
- the sender no longer has enough funds for the redemption
- the LNURL was already redeemed
How stable is Fulcrum really? I read people complaining about database corruption.
edit: Just answered my own question by looking at the Fulcrum 2.0 release notes:
Fulcrum 2.x series will no longer suffer from this problem and the process can be killed at any time (including abrupt powerloss), without any database corruption. At worst the last few blocks worth of data is rolled-back and Fulcrum will re-synch from the rollback point.
Generally agreed, but look at the progress of quantum computers:
In 2001 the number 15 was factored with Shor's algorithm on a quantum computer.
In 2012 Shor's algorithm was applied on a quantum computer to factor 21.
And now it is the year 2025 and we are still on 21.
In practical cryptography we use numbers that are about quattuorvigintillion times larger.
Exactly. This would only impact spammers (NFTs, BRC-20 tokens...). It would not stop Runes (Shitcoins encoded in small OP_RETURNs) btw.
Sphincs+
Size optimized, security level 1: 7.8kb (schnorr 64 bytes)
Since standardization, some optimizations have happened:
Sphincs+C (extra 700-1000 hashes to grind during signing, but verification is better): 6.3kb signature sizes
Shows you how huge quantum-resistant signatures are. For the same security level, signatures would have to be around 100x larger than they are today with ECDSA. This would be a big waste of precious block space.
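The ~100x figure checks out against the sizes quoted above, using 64-byte Schnorr signatures as the baseline:

```python
schnorr = 64          # bytes, current Schnorr signature size
sphincs_plus = 7800   # bytes, size-optimized SPHINCS+ at security level 1
sphincs_c = 6300      # bytes, the SPHINCS+C optimization

print(sphincs_plus / schnorr)  # ~121.9, roughly 120x larger
print(sphincs_c / schnorr)     # ~98.4, roughly 100x larger
```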
Also quantum computers are far from being practical anyway. They can just about break 21 into its prime factors: 21=3*7. Easily solvable in my head.
If a transaction doesn't use OP_RETURN or any nonstandard script types, it should still be considered valid under both the old and new consensus rules. That means such "normal" transactions could exist in both mempools (of nodes that follow BIP444 and those that don't) and wouldn't cause reorg-related issues even in a mixed network. Am I thinking about this correctly, in that standard transactions (with high enough fees) are effectively immune to any BIP444-induced reorgs because they sit in the intersection of both rule sets?
Yes, regular monetary transactions are unaffected by BIP444.
The soft fork would only limit the ways to put big chunks of data into transactions. (OP_RETURN, Inscriptions...)
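A toy illustration of that "intersection of both rule sets" point; the predicates and the data limit are made-up stand-ins, not the actual BIP444 rules:

```python
# Hypothetical validity predicates; real validation is far richer.
def valid_old_rules(tx):
    return tx["script_type"] in {"p2pkh", "p2wpkh", "p2tr", "op_return"}

def valid_new_rules(tx):
    # BIP444-style: old rules plus a cap on embedded data (made-up limit)
    return valid_old_rules(tx) and tx["data_bytes"] <= 80

payment = {"script_type": "p2wpkh", "data_bytes": 0}
inscription = {"script_type": "p2tr", "data_bytes": 50_000}

for tx in (payment, inscription):
    in_both = valid_old_rules(tx) and valid_new_rules(tx)
    print(tx["script_type"], "valid under both rule sets:", in_both)
# p2wpkh: True  -> relayed by both mempools, immune to a rules-split reorg
# p2tr:   False -> only old-rules nodes would relay or mine it
```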
so it moves into "temporary soft fork" territory, which, while possible and legitimate in itself, is clearly suboptimal in terms of communication and education: many users don't understand it
A temporary soft fork is not new. It has already happened in 2013:
https://en.bitcoin.it/wiki/BIP_0050#Immediately
Done: Release a version 0.8.1, forked directly from 0.8.0, that, for the next two months has the following new rules:
- Reject blocks that would probably cause more than 10,000 locks to be taken.
- Limit the maximum block-size created to 500,000 bytes
- Release a patch for older versions that implements the same rules, but also increases the maximum number of locks to 537,000
- Create a web page on bitcoin.org that will urge users to upgrade to 0.8.1, but will tell them how to set DB_CONFIG to 537,000 locks if they absolutely cannot.
- Over the next 2 months, send a series of alerts to users of older versions, pointing to the web page.