
B(TC)itcoin is slow || My small and humble contribution

https://panzadura.github.io/B(TC)itcoin-is-slow/

This is my small and (very) humble contribution. English is not my first language, so I apologize for spelling mistakes and inaccuracies.

B(TC)itcoin is slow

Bitcoin is slow because the block size was left at 1 MB - 2 MB counting witness data after SegWit - after the original developer "team" was pushed off the GitHub repository and it was taken over by the developers of what is now known as Blockstream.
This size has been maintained by repeatedly appealing to two issues: mining in China and the decentralization of the nodes, or transaction validators, that the article points out.

Mining in China has taken a large share of the pie that miners divide among themselves - miners being the ones who confirm transactions and mine the blocks - since 2011, and these Chinese farms sit behind what in the West is called "The Great Firewall", which prevents a stable connection and slows the propagation of a block, its mining, and the confirmation of transactions to over 3 minutes [1] [2], causing a large part of the mining coming from China - and therefore the network's hash power - to drop drastically, affecting the security of Bitcoin. The less hash power, the greater the possibility of the Bitcoin network being attacked through a 51% attack that could cause double spending. This gives rise to many debates, though, since a 51% attack on an already "mature" network like Bitcoin requires considerable expenditure on mining equipment to control 51% of the mining power, and receiving the block reward plus the fees for the transactions confirmed in each block makes it unlikely that such a miner or mining group would want to double-spend when they already receive sufficient economic compensation. Only a malicious agent intent on destroying the network, and willing to absorb the total loss on the equipment investment, would carry out such an operation. The possibility exists, but it is reduced by the fact that the miner is compensated for their activity.

Staying with the Chinese mining farms, but on a more economic front: Bitcoin has 21 million coins that are issued through mining and earned through transfer fees. These 21 million are released over time, and from then on Bitcoin becomes a deflationary asset, since there is no possibility of printing more coins. The question of the cost of a Bitcoin block, and the influence of Chinese mining, comes down to the Bitcoin subsidy, currently called the block reward: when a miner adds a block to the chain he receives the Bitcoin reward that is "inside" that block, currently set at 12.5. Every 210,000 blocks the reward is cut in half, so in less than a year (312 days from today [3]) it will drop to 6.25, and miners will see their subsidy halved unless Bitcoin's price per coin increases considerably, or mining farms start to close or reduce their equipment, thereby decreasing the hash power of the network. If the per-block subsidy halves every 210,000 blocks, there will come a time when miners can only sustain themselves and their equipment on transaction fees alone - and on a Bitcoin network limited to 7 transactions per second, with fees that tend to increase the more the network is used, it becomes unviable for miners to stay on that 1 MB network, and above all for people to want to use a payment method that is expensive and slow - even more so than paper gold. Remember that Bitcoin was born as peer-to-peer cash, not gold.
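To make the halving schedule just described concrete, here is a minimal sketch (in Python, purely illustrative and not taken from any Bitcoin implementation) of how the per-block subsidy can be computed from a block height, assuming the well-known parameters of a 50 BTC initial subsidy and a halving every 210,000 blocks:

    # Minimal sketch of Bitcoin's halving schedule (illustrative only).
    # Assumed parameters: 50 BTC initial subsidy, a halving every 210,000
    # blocks, and 100,000,000 satoshis per BTC.
    HALVING_INTERVAL = 210_000
    INITIAL_SUBSIDY_SATS = 50 * 100_000_000

    def block_subsidy(height):
        """Return the block subsidy in satoshis at the given block height."""
        halvings = height // HALVING_INTERVAL
        if halvings >= 64:   # shifting 64 or more bits would be zero anyway
            return 0
        return INITIAL_SUBSIDY_SATS >> halvings

    print(block_subsidy(629_999) / 1e8)   # 12.5  (just before the 2020 halving)
    print(block_subsidy(630_000) / 1e8)   # 6.25  (just after it)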
Therefore, if in time the subsidy or reward is going to reach 0, or fail to cover the cost of mining equipment, a solution has to be found if the developers do not want to touch the block size. And that runs through three ideas already raised in BIPs and in the community: RBF (Replace-By-Fee), the Lightning Network, and an increase in the supply of Bitcoin - since demand for Bitcoin does not rise because it offers a quality service, but because of speculation and, above all, the manipulation of Tether (USDT) and the large exchanges:

- RBF consists of replacing an unconfirmed transaction with another that supersedes it by paying a higher fee, removing the previous one from the mempool - the limbo of transactions waiting to be confirmed in Bitcoin (a simplified sketch of this replacement rule appears after this list). Although this system seems effective, it does not remove the long-term problem of keeping the block small; it postpones the problem of financing the miners rather than eliminating it and, above all, it undermines Bitcoin as a payment system by doing nothing about the rise in fees that drives users away. It also makes double spending easier [4] [5].

- The Lightning Network is a side-chain or second layer - that is, a software development not implemented in the Bitcoin network itself - and therefore it is not part of the blockchain. That alone should disqualify it, since being an external element that cannot be audited like Bitcoin can, it gives rise to "blanks", and therefore to balances that cannot be proven to exist or be audited [6], and even to the loss of money or the cancellation of transactions [7] [8]. It also faces a routing problem: in a network in constant flux, with payment channels opening and closing, it is unfeasible to propagate the full state quickly to the LN nodes - which are separate from Bitcoin's - so yet another new element of this network comes into play: the watchtowers, in charge of enforcing the rules on open channels across the whole LN payment network. Obviously this service costs extra to hire, and it is not yet implemented [9]; given the pace at which the Lightning Network is being developed, it is doubtful it will ever become available [10]. In short, to use LN properly - which still doesn't quite work - you need a node valued at $300 [11], a watchtower, a channel open 24/7, and sufficient funds in it to carry out transactions [12] [13] [14].

- The increase in the supply of Bitcoin was raised in passing by developer Peter Todd [15] [16] and will become an open debate in a few years, when the mining block reward is low and the price of Bitcoin can no longer be propped up solely by the uncontrolled printing of Tether and the manipulation of the price of the currency [17] [18], together with the collusion of the exchanges headed by Bitfinex [19] and personalities of the 'crypto' world [20] - if it survives long enough to see that moment, since Bitfinex is already being pursued for money laundering [21]. When that moment arrives I am sure that a BIP - Bitcoin Improvement Proposal - will be launched by Blockstream, or the measure will simply be announced, destroying the essence of Bitcoin and the TRUE DECENTRALIZATION: THE PROTOCOL.
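As promised under the RBF item above, here is a deliberately simplified sketch of the replacement rule. It is not the actual node policy (BIP 125 additionally requires opt-in signaling, a minimum fee-rate bump, and limits on evicted descendants); it only shows the core idea that an unconfirmed transaction can be evicted from the mempool by a conflicting transaction paying a higher fee:

    # Hypothetical, simplified replace-by-fee (RBF) check - not real node code.
    class MempoolTx:
        def __init__(self, txid, spent_outpoints, fee):
            self.txid = txid
            self.spent_outpoints = set(spent_outpoints)   # inputs being spent
            self.fee = fee                                # fee in satoshis

    def try_replace(mempool, new_tx):
        """Accept new_tx only if it outbids every unconfirmed tx it conflicts with."""
        conflicts = [tx for tx in mempool
                     if tx.spent_outpoints & new_tx.spent_outpoints]
        if conflicts and new_tx.fee <= max(tx.fee for tx in conflicts):
            return False                   # does not pay enough to replace
        for tx in conflicts:               # evict the replaced transactions
            mempool.remove(tx)
        mempool.append(new_tx)
        return True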

This brings us to the second reason for Bitcoin's slowness. Correct, true decentralization runs through the code and the team of developers and maintainers, not anywhere else. The protocol must be set in stone [22], and it is the activity of the miners that should distribute and decentralize the network, with them maintaining the nodes and the transactions in a completely capitalist economic relationship. Investing in machines and communications improves access, speed, and the propagation of transactions and blocks, makes miners true competitors, and facilitates the transmission of money and all kinds of transactions [22].
The decentralization of the nodes was the other great reason given for preventing the increase of the block size, and therefore of transaction speed. It rests on a false premise: basing the decentralization of Bitcoin - which appears nowhere in the whitepaper - on Raspberry Pi nodes. The propagation of a transaction, of all its stages, and of the blocks depends on the miner and his equipment, and on the pursuit of excellence in communications to avoid orphan blocks - which are stipulated in the Nakamoto consensus and are part of Bitcoin; they pose no problem for transactions, only for settling the block reward, which affects the miners and should push them toward greater efficiency - and reorganizations. The Bitcoin network can be audited perfectly well without there being a Bitcoin node in every house; in fact that would cause the same routing problems that occur / will occur in the LN network.
Decentralization should not run through the nodes but through the developers and, to a lesser extent, the miners. If a protocol is continually being altered by developers, then they hold the power over the network, and they must be kept in constant check by the miners through the fees on transactions.

Because of these two factors, BIP0101 - proposed by the developers Satoshi left in charge [23], and which gave rise to the creation of Bitcoin Unlimited - was rejected, and shortly after its release it was attacked through DDoS attacks, in a statement of intent by the Blockstream bitcoin network [24] [25], leaving it as a residual element.

These two reasons are the cause of the throttling suffered by the Bitcoin network - along with many other elements of the initial code that were removed, completely changing the nature and destiny of Bitcoin, which are not relevant here and I will not enumerate. Any other reason is propaganda from those who want to keep Bitcoin throttled in order to enrich themselves with mining side-subsidies and second-layer software like LN. Bitcoin has a structure similar to gold and can take on some of its attributes, but its destiny is the efficient, fast, and effective transmission of money - among other kinds of transactions.

Bitcoin was designed to professionalize miners and create a new industry around them, so mining centers will become datacenters [26] that replicate the full transaction log. This professionalization will eventually lead to specialization in other types of transactions, with new industries born around them that will run nodes according to their specialization - data, asset transfers, money, property rights, etc.

Bitcoin scales to infinity if the protocol is left FREE enough to do so.

P.S.: Core, since the departure of Hearn and Andresen, know perfectly well what they are doing: the worst breed of the cypherpunk movement has been combined with the worst breed of the current synarchy; the extremes always touch.

[1] https://np.reddit.com/btc/comments/3ygo96/blocksize_consensus_census/cye0bmt/
[2] https://www.youtube.com/watch?v=ivgxcEOyWNs&feature=youtu.be&t=2h36m20s
[3] https://www.bitcoinblockhalf.com/
[4] https://petertodd.org/2016/are-wallets-ready-for-rbf
[5] https://www.ccn.com/bitcoin-atm-double-spenders-police-need-help-identifying-four-criminals/
[6] https://bitcointalk.org/index.php?topic=4905430.0
[7] https://www.trustnodes.com/2018/03/26/lightning-network-user-loses-funds || https://www.trustnodes.com/2019/03/13/lightning-network-has-many-routing-problems-says-lead-dev-at-lightning-labs
[8] https://diar.co/volume-2-issue-25/
[9] https://blockonomi.com/watchtowers-bitcoin-lightning-network/
[10] https://twitter.com/starkness/status/676599570898419712
[11] https://store.casa/lightning-node/
[12] https://bitcoin.stackexchange.com/questions/81906/to-create-a-channel-on-the-lightning-network-do-you-have-to-execute-an-actual-t
[13] https://blog.muun.com/the-inbound-capacity-problem-in-the-lightning-network/
[14] https://medium.com/@octskyward/the-capacity-cliff-586d1bf7715e
[15] https://dashnews.org/peter-todd-argues-for-bitcoin-inflation-to-support-security/
[16] https://twitter.com/peterktodd/status/1092260891788103680
[17] https://medium.com/datadriveninvestor/tether-usd-is-used-to-manipulate-bitcoin-prices-94714e65ee31
[18] https://twitter.com/CryptoJetHammer/status/1149131155469455364
[19] https://www.bitrates.com/news/p/crypto-collusion-the-web-of-secrets-at-the-core-of-the-crypto-market
[20] https://archive.is/lk1lH
[21] https://iapps.courts.state.ny.us/nyscef/ViewDocument?docIndex=8W00ssb7x5ZOaj8HKFdbfQ==
[22] https://bitcointalk.org/index.php?topic=195.msg1611#msg1611
[23] https://github.com/bitcoin/bips/blob/master/bip-0101.mediawiki
[24] https://www.reddit.com/bitcoinxt/comments/3yewit/psa_if_youre_running_an_xt_node_in_stealth_mode/
[25] https://www.reddit.com/btc/comments/3yebzi/coinbase_down/
[26] https://bitcointalk.org/index.php?topic=532.msg6306#msg6306
submitted by Knockout_SS to bitcoincashSV

Breaking: Tiananmen Square Massacre of 1989 information strictly prohibited by the Chinese Government has been embedded in the Bitcoin Blockchain. Buckle up folks, this may get interesting.

Yesterday I proposed a possible method to end the Chinese Bitcoin Mining Monopoly by embedding pro-freedom/anti-Chinese government tyranny information prohibited by the Chinese Government on the Bitcoin Blockchain. Original thread here: https://np.reddit.com/Bitcoin/comments/60apqg/a_proposal_for_a_simple_inexpensive_and_effective/
Here is the text embedded in the Bitcoin Blockchain:
中国:应公布六四屠杀真相 1989后打压人权在习近平任内达高峰
(纽约,2016年6月2日)-人权观察今天表示,中国政府应停止否认国家在1989年6月4日前后屠杀无武装民运人士和市民事件中的角色,承认政府应对于与镇压该示威活动有关的杀人、拘押和迫害行为负起责任。
Tiananmen Square, Beijing in June 1989. 展开 天安门,北京,1989年6月
中国政府应展现诚意,立即停止拘押和骚扰纪念「六四」人士,会见幸存者及其家属,并释放因追悼「六四」而自2014年7月被关押至今的维权人士于世文。
“中国当局应将其亏欠的正义与究责还给屠杀幸存者及其家属,” 人权观察中国部主任索菲・理查森(Sophie Richardson)说。“1989年迄今政治打压,不但未能遏止要求基本自由与负责政府的呼声,反而使中共的合法性加倍流失。” 和往年同样,
当局已在「六四」周年来临前提升戒备,严防出现悼念活动:
2016年5月28日,成都当局以煽动颠覆罪名拘捕符海陆,他被怀疑在社交媒体发布贴有与「六四」有关标签的酒瓶图片。 据维权网披露,另有至少四人因为纪念「六四」而被警方拘留,包括成都诗人马青和北京维权人士徐彩虹、赵长青、张宝成。 当局并将多名维权人士软禁或限制行动,包括天安门母亲发起人丁子霖和山东退休教授孙文广。 著名记者高瑜虽在2015年11月获准保外就医出狱,仍须在家中服完五年刑期;一直受到实质软禁的前中共高干鲍彤,则被强迫以「旅游」名义离开北京。 1989年至今,中国政府一直违背国内法和国际人权法义务,严格限制基本人权──特别是言论、集会、结社自由和参政权。然而,对异议人士的不容忍,自2013年3月习近平掌权后更达高峰。中国政府正研拟或已通过数项新的国家安全法律,加强对公民社会的限制和管控;互联网和媒体言论空间受到进一步紧缩;数百名维权人士遭到拘押和判刑;意见领袖和自由派知识分子被刻意起诉;同时,政府还大力推行党领导一切的“正确思想”。
虽然最后一位因参与八九民运入狱人士可望于2016年10月刑满释放,但有许多当年示威者出狱后继续从事维权活动而再被关押。1989年组织广州民运活动而坐牢18个月的于世文,即因悼念「六四」而于2014年被拘押至今。其他资深维权人士,包括诺贝尔和平奖得主刘晓波、四川维权人士刘贤斌、陈卫和广东维权人士郭飞雄,分别被判处重刑或以政治罪名遭到羁押。
中国当局应将其亏欠的正义与究责还给屠杀幸存者及其家属 理查森 中国部主任, 人权观察 当局防范「六四」议题的另一方式,是禁止屠杀后逃亡海外的八九民运组织者或参与者返国。例如,前学生领袖吾尔开希、熊焱至今归国无门,两人虽曾在2013到2014年屡次闯关,但均遭香港当局拒绝入境。
中国政府持续否认屠杀和平示威者,敌视和平的民众参与,与其他地方的发展形成强烈对比。在2016年5月的就职演说中,台湾新任总统蔡英文宣示将以成立真相与和解委员会的方式“面对过去”,俾能记取“那个时代的错误”──她指的应是所谓“白色恐怖”时期的政治迫害。缅甸经历50年军事独裁后,现在也已开始向选举民主转型。
背景:1989血腥镇压
天安门屠杀缘起于学生、工人和其他群众,为呼吁言论自由、责任政治和扫除腐败,于1989年4月在北京天安门广场及各大城市发起和平集会。随着示威活动日益扩大,政府在1989年5月下旬宣布戒严。
1989年6月3日到4日,军队开火杀害不明人数的和平示威者和旁观者。在北京,有部分市民为反击军方暴力而攻击运兵车队,焚烧交通工具。屠杀后,政府实施全国性镇压,以“反革命”和扰乱社会秩序、纵火等刑事罪名逮捕数千人。
中国政府从未承认对屠杀负有责任,也未曾将任何杀人凶手移送法办。它既拒绝对事件进行调查,也不愿公布关于死亡、受伤、失踪或服刑者的数据。主要由死难者家属组成的一个非政府组织,天安门母亲,收集了202名在北京和其他城市遭镇压死亡者的详细资料。27年过去了,许多天安门母亲成员身患病痛,部分已经去世,却未能见到正义伸张,也不知道他们的亲人究竟如何罹难。
人权观察呼吁中国政府把握「六四」27周年的机会,彻底改变官方对此事件的立场。具体而言,它应做到:
尊重言论、结社与和平集会自由权,停止骚扰及任意拘押质疑「六四」官方说法的人士; 与天安门母亲成员会面,并向他们道歉; 允许对「六四」事件进行独立、公开的调查,并尽速将结果公诸大众; 允许因「六四」流亡海外的中国公民自由返国;以及 调查所有参与策划或指挥非法利用致命武力对付和平示威者的官员和军官,并公布死难者名单。 “自1989年以来,中国在政治改革方面不仅毫无进展,反而在原地踏步甚至向后退却,”理查森说。“北京要想向前跃进,就必须正视过去的伤痛。这不但有其他怀抱自信政府的先例可循,也是全中国的民心所向。” “自1989年以来,中国在政治改革方面不仅毫无进展,反而在原地踏步甚至向后退却,”理查森说。“北京要想向前跃进,就必须正视过去的伤痛。这不但有其他怀抱自信政府的先例可循,也是全中国的民心所向。” 区域/国家 亚洲 中国和西藏 主题 言论自由
China: Tell the Truth About Tiananmen on Anniversary
Repression of Rights at Post-1989 Peak Under President Xi
(New York) – The Chinese government should cease its denial about the state’s role in the massacre of unarmed pro-democracy protesters and citizens around June 4, 1989, and acknowledge the government’s responsibility for the killings, detentions, and persecution associated with suppression of the protests, Human Rights Watch said today.
Tiananmen Square, Beijing in June 1989.
Beijing should demonstrate that commitment by immediately ceasing its detention and harassment of individuals marking the occasion, meeting with survivors and their family members, and releasing Yu Shiwen, an activist held since July 2014 for commemorating the massacre.
“Chinese authorities owe a debt of justice and accountability to survivors of the massacre and their family members,” said Sophie Richardson, China director. “Political repression since 1989 has not eliminated yearnings for basic freedoms and an accountable government – instead it has only compounded the Party’s lack of legitimacy.”
As in previous years, authorities have been on high alert ahead of the anniversary to preempt commemorations of the massacre:
In Chengdu on May 28, 2016, authorities detained Fu Hailu on subversion charges; he is suspected of posting on social media images of liquor bottles with labels related to the crackdown. At least four others – poet Ma Qing in Chengdu, and activists Xu Caihong, Zhao Changqing, and Zhang Baocheng in Beijing – are believed to be in police custody for commemorating the occasion, according to the nongovernmental organization Chinese Human Rights Defenders. Authorities have also put under house arrest or restricted the movement of a number of activists, including Ding Zilin, a founding member of the Tiananmen Mothers, and retired Shandong professor Sun Wenguang. Prominent journalist Gao Yu, who in November 2015 was released from prison on medical parole to serve out her five year sentence at home, and former top official Bao Tong, who remains under effective house arrest, have been required to leave Beijing for enforced “vacations.” Since 1989, the Chinese government has kept tight control over basic human rights – particularly freedoms of expression, assembly, and association, and the right to political participation – despite its obligations under domestic and international human rights law. Intolerance toward dissent, however, has reached a new peak since President Xi Jinping came to power in March 2013. The government has drafted or promulgated new state security laws that put in place more restrictive controls over civil society; further curtailed expression on the Internet and media; detained and imprisoned hundreds of activists in successive waves of arrests; targeted for prosecution public opinion leaders and liberal thinkers; and aggressively promoted the “correct ideology” of Party supremacy.
While the last individual known to be imprisoned for his involvement in the 1989 protests will be released in October 2016, many who were involved in the demonstrations and who continued their activism after their release have been re-incarcerated. Yu Shiwen, who spent 18 months in prison for his 1989 work organizing pro-democracy efforts in Guangzhou, has been detained since 2014 for commemorating the massacre that year. Other veteran activists, including Nobel Peace Prize winner Liu Xiaobo, Sichuan activists Liu Xianbin and Chen Wei, and Guangdong activist Guo Feixiong are either serving long prison sentences or have been detained on political charges.
Chinese authorities owe a debt of justice and accountability to survivors of the massacre and their family members.
Authorities have also prevented discussions about the massacre by blocking organizers of, or participants in, the 1989 protests from returning from other countries where they sought refuge in the aftermath of the massacre. Former student leaders Wuer Kaixi and Xiong Yan, for example, have been unable to re-enter China. Their repeated attempts to return in 2013 and 2014 were rejected by Hong Kong authorities.
The Chinese government’s continued denial of the massacre of protesters and hostility toward peaceful political participation contrast sharply with developments elsewhere. In her May 2016 inaugural address, Tsai Ing-wen, Taiwan’s new president, vowed to “face the past” by setting up a new Truth and Reconciliation Commission to investigate “mistakes” of “the era” – which likely refers to the period of political repression known as the White Terror. After five decades of military dictatorship, Burma has begun a transition to electoral democracy.
Background: Bloodshed in 1989
The Tiananmen massacre was precipitated by the peaceful gatherings of students, workers, and others in Beijing’s Tiananmen Square and other cities in April 1989 calling for freedom of expression, accountability, and an end to corruption. The government responded to the intensifying protests in late May 1989 by declaring martial law.
On June 3 and 4, the military opened fire and killed untold numbers of peaceful protesters and bystanders. In Beijing, some citizens attacked army convoys and burned vehicles in response to the military’s violence. Following the killings, the government implemented a national crackdown and arrested thousands of people for “counter-revolution” and other criminal charges, including disrupting social order and arson.
The government has never accepted responsibility for the massacre or held any perpetrators legally accountable for the killings. It has refused to conduct an investigation into the events or release data on those who were killed, injured, disappeared, or imprisoned. The nongovernmental organization Tiananmen Mothers, consisting mostly of family members of those killed, has established the details of 202 people who were killed during the suppression of the movement in Beijing and other cities. Twenty-seven years on, many members of the Tiananmen Mothers are ailing and some have died without seeing justice or knowing precisely what has happened to their family members.
Human Rights Watch called on the Chinese government to use the opportunity of the 27th anniversary of June 4, 1989, to reverse its current position on the event. Specifically, it should:
Respect the rights to freedom of expression, association, and peaceful assembly and cease the harassment and arbitrary detention of individuals who challenge the official account of June 4; Meet with and apologize to members of the Tiananmen Mothers; Permit an independent public inquiry into June 4, and promptly release its findings and conclusions to the public; Allow the unimpeded return of Chinese citizens exiled due to their connections to the events of 1989; and Investigate all government and military officials who planned or ordered the unlawful use of lethal force against peaceful demonstrators, and publish the names of all those who died. “Instead of advancing, China has stagnated, and even regressed, in terms of political reforms since 1989,” Richardson said. “Beijing can only move forward by facing up to its painful past, as others have had the confidence to do, and as people across China clearly want.”
At the very least, this proves the concept of information delivery and storage using the Bitcoin Blockchain, bypassing international borders and laws.
You can verify that this has in fact been embedded in the Bitcoin Blockchain yourself here: http://www.cryptograffiti.info/#4901
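For readers wondering how text like this gets into the chain at all: the usual technique is to place the bytes in provably unspendable OP_RETURN outputs, splitting larger payloads such as this one across many outputs and transactions (roughly what cryptograffiti does). The following is only a hedged sketch of the encoding step - the payload limit is an assumed relay-policy figure, and funding, signing, and broadcasting the carrier transactions are deliberately omitted:

    # Illustrative sketch: chunk a UTF-8 message into OP_RETURN output scripts.
    # Funding, signing, and broadcasting the carrier transactions are omitted.
    OP_RETURN = 0x6a
    MAX_PAYLOAD = 75   # keep each push <= 75 bytes so a one-byte length prefix works

    def op_return_scripts(message):
        data = message.encode("utf-8")
        scripts = []
        for i in range(0, len(data), MAX_PAYLOAD):
            chunk = data[i:i + MAX_PAYLOAD]
            # script layout: OP_RETURN <push length> <chunk bytes>
            scripts.append(bytes([OP_RETURN, len(chunk)]) + chunk)
        return scripts

    chunks = op_return_scripts("六四 1989 - June 4, 1989")
    print(len(chunks), "OP_RETURN output script(s) needed")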
What reaction do you think the tyrannical Chinese Government will have to this information being distributed within China by Bitcoin Miners hosting nodes inside of the Great Firewall of China? If this continues, at some point they will certainly take action to close this avenue of Freedom of Speech. Will they force the Miners to adopt a fork which rolls back the Blockchain to scrub this prohibited information from the Bitcoin Blockchain, and thus effectively create a new altcoin? Who will follow this new blockchain, and who will follow the original?
submitted by Barkey_McButtstain to Bitcoin

Greg Maxwell /u/nullc (CTO of Blockstream) has sent me two private messages in response to my other post today (where I said "Chinese miners can only win big by following the market - not by following Core/Blockstream."). In response to his private messages, I am publicly posting my reply, here:

Note:
Greg Maxwell nullc sent me 2 short private messages criticizing me today. For whatever reason, he seems to prefer messaging me privately these days, rather than responding publicly on these forums.
Without asking him for permission to publish his private messages, I do think it should be fine for me to respond to them publicly here - only quoting 3 phrases from them, namely: "340GB", "paid off", and "integrity" LOL.
There was nothing particularly new or revealing in his messages - just more of the same stuff we've all heard before. I have no idea why he prefers responding to me privately these days.
Everything below is written by me - I haven't tried to upload his 2 PMs to me, since he didn't give permission (and I didn't ask). The only stuff below from his 2 PMs is the 3 phrases already mentioned: "340GB", "paid off", and "integrity". The rest of this long wall of text is just my "open letter to Greg."
TL;DR: The code that maximally uses the available hardware and infrastructure will win - and there is nothing Core/Blockstream can do to stop that. Also, things like the Berlin Wall or the Soviet Union lasted for a lot longer than people expected - but, conversely, they also got swept away a lot faster than anyone expected. The "vote" for bigger blocks is an ongoing referendum - and Classic is running on 20-25% of the network (and can and will jump up to the needed 75% very fast, when investors demand it due to the inevitable "congestion crisis") - which must be a massive worry for Greg/Adam/Austin and their backers from the Bilderberg Group. The debate will inevitably be decided in favor of bigger blocks - simply because the market demands it, and the hardware / infrastructure supports it.
Hello Greg Maxwell nullc (CTO of Blockstream) -
Thank you for your private messages in response to my post.
I respect (most of) your work on Bitcoin, but I think you were wrong on several major points in your messages, and in your overall economic approach to Bitcoin - as I explain in greater detail below:
Correcting some inappropriate terminology you used
As everybody knows, Classic or Unlimited or Adaptive (all of which I did mention specifically in my post) do not support "340GB" blocks (which I did not mention in my post).
It is therefore a straw-man for you to claim that big-block supporters want "340GB" blocks. Craig Wright may want that - but nobody else supports his crazy posturing and ridiculous ideas.
You should know that what actual users / investors (and Satoshi) actually do want, is to let the market and the infrastructure decide on the size of actual blocks - which could be around 2 MB, or 4 MB, etc. - gradually growing in accordance with market needs and infrastructure capabilities (free from any arbitrary, artificial central planning and obstructionism on the part of Core/Blockstream, and its investors - many of whom have a vested interest in maintaining the current debt-backed fiat system).
You yourself (nullc) once said somewhere that bigger blocks would probably be fine - ie, they would not pose a decentralization risk. (I can't find the link now - maybe I'll have time to look for it later.) I found the link:
https://np.reddit.com/btc/comments/43mond/even_a_year_ago_i_said_i_though_we_could_probably/
I am also surprised that you now seem to be among those making unfounded insinuations that posters such as myself must somehow be "paid off" - as if intelligent observers and participants could not decide on their own, based on the empirical evidence, that bigger blocks are needed, when the network is obviously becoming congested and additional infrastructure is obviously available.
Random posters on Reddit might say and believe such conspiratorial nonsense - but I had always thought that you, given your intellectual abilities, would have been able to determine that people like me are able to arrive at supporting bigger blocks quite entirely on our own, based on two simple empirical facts, ie:
  • the infrastructure supports bigger blocks now;
  • the market needs bigger blocks now.
In the present case, I will simply assume that you might be having a bad day, for you to erroneously and groundlessly insinuate that I must be "paid off" in order to support bigger blocks.
Using Occam's Razor
The much simpler explanation is that bigger-block supporters believe they will get "paid off" in the form of bigger gains on their investment in Bitcoin.
Rational investors and users understand that bigger blocks are necessary, based on the apparent correlation (not necessarily causation!) between volume and price (as mentioned in my other post, and backed up with graphs).
And rational network capacity planners (a group which you should be in - but for some mysterious reason, you're not) also understand that bigger blocks are necessary, and quite feasible (and do not pose any undue "centralization risk".)
As I have been on the record for months publicly stating, I understand that bigger blocks are necessary based on the following two objective, rational reasons:
  • because I've seen the graphs; and
  • because I've seen the empirical research in the field (from guys like Gavin and Toomim) showing that the network infrastructure (primarily bandwidth and latency - but also RAM and CPU) would also support bigger blocks now (I believe they showed that 3-4MB blocks would definitely work fine on the network now - possibly even 8 MB - without causing undue centralization).
Bigger-block supporters are being objective; smaller-block supporters are not
I am surprised that you no longer talk about this debate in those kind of objective terms:
  • bandwidth, latency (including Great Firewall of China), RAM, CPU;
  • centralization risk
Those are really the only considerations which we should be discussing in this debate - because those are the only rational considerations which might justify the argument for keeping 1 MB.
And yet you, and Adam Back adam3us, and your company Blockstream (financed by the Bilderberg Group, which has significant overlap with central banks and the legacy, debt-based, violence-backed fiat money system that has been running and slowly destroying our world) never make such objective, technical arguments anymore.
And when you make unfounded conspiratorial, insulting insinuations saying people who disagree with you on the facts must somehow be "paid off", then you are now talking like some "nobody" on Reddit - making wild baseless accusations that people must be "paid off" to support bigger blocks, something I had always thought was "beneath" you.
Instead, Occams's Razor suggests that people who support bigger blocks are merely doing so out of:
  • simple, rational investment policy; and
  • simple, rational capacity planning.
At this point, the burden is on guys like you (nullc) to explain why you support a so-called scaling "roadmap" which is not aligned with:
  • simple, rational investment policy; and
  • simple, rational capacity planning
The burden is also on guys like you to show that you do not have a conflict of interest, due to Blockstream's highly-publicized connections (via insurance giant AXA - whose CEO is also the Chairman of the Bilderberg Group; and companies such as the "Big 4" accounting firm PwC) to the global cartel of debt-based central banks with their infinite money-printing.
In a nutshell, the argument of big-block supporters is simple:
If the hardware / network infrastructure supports bigger blocks (and it does), and if the market demands it (and it does), then we certainly should use bigger blocks - now.
You have never provided a counter-argument to this simple, rational proposition - for the past few years.
If you have actual numbers or evidence or facts or even legitimate concerns (regarding "centralization risk" - presumably your only argument) then you should show such evidence.
But you never have. So we can only assume either incompetence or malfeasance on your part.
As I have also publicly and privately stated to you many times, with the utmost of sincerity: We do of course appreciate the wealth of stellar coding skills which you bring to Bitcoin's cryptographic and networking aspects.
But we do not appreciate the obstructionism and centralization which you also bring to Bitcoin's economic and scaling aspects.
Bitcoin is bigger than you.
The simple reality is this: If you can't / won't let Bitcoin grow naturally, then the market is going to eventually route around you, and billions (eventually trillions) of investor capital and user payments will naturally flow elsewhere.
So: You can either be the guy who wrote the software to provide simple and safe Bitcoin scaling (while maintaining "reasonable" decentralization) - or the guy who didn't.
The choice is yours.
The market, and history, don't really care about:
  • which "side" you (nullc) might be on, or
  • whether you yourself might have been "paid off" (or under a non-disclosure agreement written perhaps by some investors associated with the Bilderberg Group and the legacy debt-based fiat money system which they support), or
  • whether or not you might be clueless about economics.
Crypto and/or Bitcoin will move on - with or without you and your obstructionism.
Bigger-block supporters, including myself, are impartial
By the way, my two recent posts this past week on the Craig Wright extravaganza...
...should have given you some indication that I am being impartial and objective, and I do have "integrity" (and I am not "paid off" by anybody, as you so insultingly insinuated).
In other words, much like the market and investors, I don't care who provides bigger blocks - whether it would be Core/Blockstream, or Bitcoin Classic, or (the perhaps confusingly-named) "Bitcoin Unlimited" (which isn't necessarily about some kind of "unlimited" blocksize, but rather simply about liberating users and miners from being "limited" by controls imposed by any centralized group of developers, such as Core/Blockstream and the Bilderbergers who fund you).
So, it should be clear by now I don't care one way or the other about Gavin personally - or about you, or about any other coders.
I care about code, and arguments - regardless of who is providing such things - eg:
  • When Gavin didn't demand crypto proof from Craig, and you said you would have: I publicly criticized Gavin - and I supported you.
  • When you continue to impose needless obstacles to bigger blocks, then I continue to criticize you.
In other words, as we all know, it's not about the people.
It's about the code - and what the market wants, and what the infrastructure will bear.
You of all people should know that that's how these things should be decided.
Fortunately, we can take what we need, and throw away the rest.
Your crypto/networking expertise is appreciated; your dictating of economic parameters is not.
As I have also repeatedly stated in the past, I pretty much support everything coming from you, nullc:
  • your crypto and networking and game-theoretical expertise,
  • your extremely important work on Confidential Transactions / homomorphic encryption.
  • your desire to keep Bitcoin decentralized.
And I (and the network, and the market/investors) will always thank you profusely and quite sincerely for these massive contributions which you make.
But open-source code is (fortunately) à la carte. It's mix-and-match. We can use your crypto and networking code (which is great) - and we can reject your cripple-code (artificially small 1 MB blocks), throwing it where it belongs: in the garbage heap of history.
So I hope you see that I am being rational and objective about what I support (the code) - and that I am also always neutral and impartial regarding who may (or may not) provide it.
And by the way: Bitcoin is actually not as complicated as certain people make it out to be.
This is another point which might be lost on certain people, including:
And that point is this:
The crypto code behind Bitcoin actually is very simple.
And the networking code behind Bitcoin is actually also fairly simple as well.
Right now you may be feeling rather important and special, because you're part of the first wave of development of cryptocurrencies.
But if the cryptocurrency which you're coding (Core/Blockstream's version of Bitcoin, as funded by the Bilderberg Group) fails to deliver what investors want, then investors will dump you so fast your head will spin.
Investors care about money, not code.
So bigger blocks will eventually, inevitably come - simply because the market demand is there, and the infrastructure capacity is there.
It might be nice if bigger blocks would come from Core/Blockstream.
But who knows - it might actually be nicer (in terms of anti-fragility and decentralization of development) if bigger blocks were to come from someone other than Core/Blockstream.
So I'm really not begging you - I'm warning you, for your own benefit (your reputation and place in history), that:
Either way, we are going to get bigger blocks.
Simply because the market wants them, and the hardware / infrastructre can provide them.
And there is nothing you can do to stop us.
So the market will inevitably adopt bigger blocks either with or without you guys - given that the crypto and networking tech behind Bitcoin is not all that complex, and it's open-source, and there is massive pent-up investor demand for cryptocurrency - to the tune of multiple billions (or eventually trillions) of dollars.
It ain't over till the fat lady sings.
Regarding the "success" which certain small-block supporters are (prematurely) gloating about, during this time when a hard-fork has not happened yet: they should bear in mind that the market has only begun to speak.
And the first thing it did when it spoke was to dump about 20-25% of Core/Blockstream nodes in a matter of weeks. (And the next thing it did was Gemini added Ethereum trading.)
So a sizable percentage of nodes are already using Classic. Despite desperate, irrelevant attempts of certain posters on these forums to "spin" the current situation as a "win" for Core - it is actually a major "fail" for Core.
Because if Core/Blockstream were not "blocking" Bitcoin's natural, organic growth with that crappy little line of temporary anti-spam kludge-code which you and your minions have refused to delete - despite Satoshi explicitly telling you to, back in 2010 ("MAX_BLOCKSIZE = 1000000") - then there would be something close to 0% of nodes running Classic, not 25% (and many more addable at the drop of a hat).
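For context, a consensus cap of that kind amounts to a single validity check. A hedged, simplified sketch (not the actual Core source) of how such a constant rejects otherwise-valid blocks looks roughly like this:

    # Simplified illustration of a consensus block-size cap - not actual Core code.
    MAX_BLOCK_SIZE = 1_000_000   # bytes; the constant discussed above

    def check_block_size(serialized_block):
        """A block larger than the cap is invalid, regardless of how valid its
        transactions and proof-of-work otherwise are."""
        return len(serialized_block) <= MAX_BLOCK_SIZE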
This vote is ongoing.
This "voting" is not like a normal vote in a national election, which is over in one day.
Unfortunately for Core/Blockstream, the "voting" for Classic and against Core is actually two-year-long referendum.
It is still ongoing, and it can rapidly swing in favor of Classic at any time between now and Classic's install-by date (around January 1, 2018 I believe) - at any point when the market decides that it needs and wants bigger blocks (ie, due to a congestion crisis).
You know this, Adam Back knows this, Austin Hill knows this, and some of your brainwashed supporters on censored forums probably know this too.
This is probably the main reason why you're all so freaked out and feel the need to even respond to us unwashed bigger-block supporters, instead of simply ignoring us.
This is probably the main reason why Adam Back feels the need to keep flying around the world, holding meetings with miners, making PowerPoint presentations in English and Chinese, and possibly also making secret deals behind the scenes.
This is also why Theymos feels the need to censor.
And this is perhaps also why your brainwashed supporters from censored forums feel the need to constantly make their juvenile, content-free, drive-by comments (and perhaps also why you evidently feel the need to privately message me your own comments now).
Because, once again, for the umpteenth time in years, you've seen that we are not going away.
Every day you get another worrisome, painful reminder from us that Classic is still running on 25% of "your" network.
And every day you get another worrisome, painful reminder that Classic could easily jump to 75% in a matter of days - as soon as investors see their $7 billion in wealth starting to evaporate when the network goes into a congestion crisis due to your obstructionism and insistence on artificially small 1 MB blocks.
If your code were good enough to stand on its own, then all of Core's globetrotting and campaigning and censorship would not be necessary.
But you know, and everyone else knows, that your cripple-code does not include simple and safe scaling - and the competing code (Classic, Unlimited) does.
So your code cannot stand on its own - and that's why you and your supporters feel that it's necessary to keep up the censorship and the lies and the snark. It's shameful that a smart coder like you would be involved with such tactics.
Oppressive regimes always last longer than everyone expects - but they also collapse faster than anyone expects.
We already have interesting historical precedents showing how grassroots resistance to centralized oppression and obstructionism tends to work out in the end. The phenomenon is two-fold:
  • The oppression usually drags on much longer than anyone expects; and
  • The liberation usually happens quite abruptly - much faster than anyone expects.
The Berlin Wall stayed up much longer than everyone expected - but it also came tumbling down much faster than everyone expected.
Examples of oppressive regimes that held on surprisingly long, and collapsed surprisingly fast, are rather common - eg, the collapse of the Berlin Wall, or the collapse of the Soviet Union.
(Both examples are actually quite germane to the case of Blockstream/Core/Theymos - as those despotic regimes were also held together by the fragile chewing gum and paper clips of denialism and censorship, and the brainwashed but ultimately complacent and fragile yes-men that inevitably arise in such an environment.)
The Berlin Wall did indeed seem like it would never come down. But the grassroots resistance against it was always there, in the wings, chipping away at the oppression, trying to break free.
And then when it did come down, it happened in a matter of days - much faster than anyone had expected.
That's generally how these things tend to go:
  • oppression and obstructionism drag on forever, and the people oppressing freedom and progress erroneously believe that they are "winning" (in this case: Blockstream/Core and you and Adam and Austin - and the clueless yes-men on censored forums like r\bitcoin who mindlessly support you, and the obedient Chinese miners who, thus far, have apparently been too polite to oppose you);
  • then one fine day, the market (or society) mysteriously and abruptly decides one day that "enough is enough" - and the tsunami comes in and washes the oppressors away in the blink of an eye.
So all these non-entities with their drive-by comments on these threads and their premature gloating and triumphalism are irrelevant in the long term.
The only thing that really matters is investors and users - who are continually applying grassroots pressure on the network, demanding increased capacity to keep the transactions flowing (and the price rising).
And then one day: the Berlin Wall comes tumbling down - or in the case of Bitcoin: a bunch of mining pools have to switch to Classic, and they will switch so fast it will make your head spin.
Because there will be an emergency congestion crisis where the network is causing the price to crash and threatening to destroy $7 billion in investor wealth.
So it is understandable that your supporters might sometimes prematurely gloat, or you might feel the need to try to comment publicly or privately, or Adam might feel the need to jet around the world.
Because a large chunk of people have rejected your code.
And because many more can and will - and they'll do so in the blink of an eye.
Classic is still out there, "waiting in the wings", ready to be installed, whenever the investors tell the miners that it is needed.
Fortunately for big-block supporters, in this "election", the polls don't stay open for just one day, like in national elections.
The voting for Classic is on-going - it runs for two years. It is happening now, and it will continue to happen until around January 1, 2018 (which is when Classic-as-an-option has been set to officially "expire").
To make a weird comparison with American presidential politics: it's kinda like if either Hillary or Trump were already in office - but meanwhile there was also an ongoing election (where people could change their votes as often as they want), and the day people got fed up with the incompetent incumbent, they could throw them out (and install someone like Bernie instead) in the blink of an eye.
So while the inertia does favor the incumbent (because people are lazy: it takes them a while to become informed, or fed up, or panicked), this kind of long-running, basically never-ending election favors the insurgent (because once the incumbent visibly screws up, the insurgent gets adopted - permanently).
Everyone knows that Satoshi explicitly defined Bitcoin to be a voting system, in and of itself. Not only does the network vote on which valid block to append next to the chain - the network also votes on the very definition of what a "valid block" is.
Go ahead and re-read the anonymous PDF that was recently posted on the subject of how you are dangerously centralizing Bitcoin by trying to prevent any votes from taking place:
https://np.reddit.com/btc/comments/4hxlquhoh_a_warning_regarding_the_onset_of_centralised/
The insurgent (Classic, Unlimited) is right (they maximally use available bandwidth) - while the incumbent (Core) is wrong (it needlessly throws bandwidth out the window, choking the network, suppressing volume, and hurting the price).
And you, and Adam, and Austin Hill - and your funders from the Bilderberg Group - must be freaking out that there is no way you can get rid of Classic (due to the open-source nature of cryptocurrency and Bitcoin).
Cripple-code will always be rejected by the network.
Classic is already running on about 20%-25% of nodes, and there is nothing you can do to stop it - except commenting on these threads, or having guys like Adam flying around the world doing PowerPoints, etc.
Everything you do is irrelevant when compared against billions of dollars in current wealth (and possibly trillions more down the road) which needs and wants and will get bigger blocks.
You guys no longer even make technical arguments against bigger blocks - because there are none: Classic's codebase is 99% the same as Core, except with bigger blocks.
So when we do finally get bigger blocks, we will get them very, very fast: because it only takes a few hours to upgrade the software to keep all the good crypto and networking code that Core/Blockstream wrote - while tossing that single line of 1 MB "max blocksize" cripple-code from Core/Blockstream into the dustbin of history - just like people did with the Berlin Wall.
submitted by ydtm to btc

Transcript of the community Q&A with Steve Shadders and Daniel Connolly of the Bitcoin SV development team. We talk about the path to big blocks, new opcodes, selfish mining, malleability, and why November will lead to a divergence in consensus rules. (Cont in comments)

We've gone through the painstaking process of transcribing the linked interview with Steve Shadders and Daniel Connolly of the Bitcoin SV team. There is an amazing amount of information in this interview that we feel is important for businesses and miners to hear, so we believe it was important to get this in a written form. To avoid any bias, the transcript is taken almost word for word from the video, with just a few changes made for easier reading. If you see any corrections that need to be made, please let us know.
Each question is in bold, and each question and response is timestamped accordingly. You can follow along with the video here:
https://youtu.be/tPImTXFb_U8

BEGIN TRANSCRIPT:

Connor: 0:02:19.68,0:02:45.10
Alright so thank You Daniel and Steve for joining us. We're joined by Steve Shadders and Daniel Connolly from nChain and also the lead developers of the Satoshi’s Vision client. So Daniel and Steve do you guys just want to introduce yourselves before we kind of get started here - who are you guys and how did you get started?
Steve: 0:02:38.83,0:03:30.61
So I'm Steve Shadders and at nChain I am the director of solutions in engineering and specifically for Bitcoin SV I am the technical director of the project which means that I'm a bit less hands-on than Daniel but I handle a lot of the liaison with the miners - that's the conditional project.
Daniel:
Hi I’m Daniel I’m the lead developer for Bitcoin SV. As the team's grown that means that I do less actual coding myself but more organizing the team and organizing what we’re working on.
Connor: 0:03:23.07,0:04:15.98
Great so we took some questions - we asked on Reddit to have people come and post their questions. We tried to take as many of those as we could and eliminate some of the duplicates, so we're gonna kind of go through each question one by one. We added some questions of our own in and we'll try and get through most of these if we can. So I think we just wanted to start out and ask, you know, Bitcoin Cash is a little bit over a year old now. Bitcoin itself is ten years old but in the past a little over a year now what has the process been like for you guys working with the multiple development teams and, you know, why is it important that the Satoshi’s vision client exists today?
Steve: 0:04:17.66,0:06:03.46
I mean yes well we’ve been in touch with the developer teams for quite some time - I think a bi-weekly meeting of Bitcoin Cash developers across all implementations started around November last year. I myself joined those in January or February of this year and Daniel a few months later. So we communicate with all of those teams and I think, you know, it's not been without its challenges. It's well known that there's a lot of disagreements around it, but what I do look forward to in the near future is a day when the consensus issues themselves are all rather settled, and if we get to that point then there's not going to be much reason for the different developer teams to disagree on stuff. They might disagree on non-consensus related stuff but that's not the end of the world because, you know, Bitcoin Unlimited is free to go and implement whatever they want in the back end of Bitcoin Unlimited and Bitcoin SV is free to do whatever they want in the backend, and if they interoperate on a non-consensus level, great. If they don't, it's not such a big problem - there will obviously be bridges between the two. So, yeah, I think going forward the complications of having so many personalities with wildly different ideas are going to get less and less.
Cory: 0:06:00.59,0:06:19.59
I guess moving forward now another question about the testnet - a lot of people on Reddit have been asking what the testing process for Bitcoin SV has been like, and if you guys plan on releasing any of those results from the testing?
Daniel: 0:06:19.59,0:07:55.55
Sure yeah so our release will be concentrated on the stability, right, with the first release of Bitcoin SV and that involved doing a large amount of additional testing particularly not so much at the unit test level but at the more system test so setting up test networks, performing tests, and making sure that the software behaved as we expected, right. Confirming the changes we made, making sure that there aren’t any other side effects. Because of, you know, it was quite a rush to release the first version so we've got our test results documented, but not in a way that we can really release them. We're thinking about doing that but we’re not there yet.
Steve: 0:07:50.25,0:09:50.87
Just to tidy that up - we've spent a lot of our time developing really robust test processes and the reporting is something that we can read on our internal systems easily, but we need to tidy that up to give it out for public release. The priority for us was making sure that the software was safe to use. We've established a test framework that involves a progression of code changes through multiple test environments - I think it's five different test environments before it gets the QA stamp of approval - and as for the question about the testnet, yeah, we've got four of them. We've got Testnet One and Testnet Two. A slightly different numbering scheme to the testnet three that everyone's probably used to – that’s just how we reference them internally. They're [1 and 2] both forks of Testnet Three. [Testnet] One we used for activation testing, so we would test things before and after activation - that one’s set to reset every couple of days. The other one [Testnet Two] was set to post activation so that we can test all of the consensus changes. The third one was a performance test network which I think most people have probably have heard us refer to before as Gigablock Testnet. I get my tongue tied every time I try to say that word so I've started calling it the Performance test network and I think we're planning on having two of those: one that we can just do our own stuff with and experiment without having to worry about external unknown factors going on and having other people joining it and doing stuff that we don't know about that affects our ability to baseline performance tests, but the other one (which I think might still be a work in progress so Daniel might be able to answer that one) is one of them where basically everyone will be able to join and they can try and mess stuff up as bad as they want.
Daniel: 0:09:45.02,0:10:20.93
Yeah, so we recently shared the details of Testnet One and Two with the other BCH developer groups. The Gigablock test network we've shared with one group so far, but yeah, we're building it, as Steve pointed out, to be publicly accessible.
Connor: 0:10:18.88,0:10:44.00
I think that was my next question I saw that you posted on Twitter about the revived Gigablock testnet initiative and so it looked like blocks bigger than 32 megabytes were being mined and propagated there, but maybe the block explorers themselves were coming down - what does that revived Gigablock test initiative look like?
Daniel: 0:10:41.62,0:11:58.34
That's what the Gigablock test network is. So the Gigablock test network was first set up by Bitcoin Unlimited with nChain’s help and they did some great work on that, and we wanted to revive it. So we wanted to bring it back and do some large-scale testing on it. It's a flexible network - at one point we had eight different large nodes spread across the globe, sort of mirroring the old one. Right now we've scaled back because we're not using it at the moment, so you'll notice I think three. We have produced some large blocks there and it's helped us a lot in our research into the scaling capabilities of Bitcoin SV, so it's guided the work that the team’s been doing for the last month or two on the improvements that we need for scalability.
Steve: 0:11:56.48,0:13:34.25
I think that's actually a good point to kind of frame where our priorities have been in kind of two separate stages. I think, as Daniel mentioned before, because of the time constraints we kept the change set for the October 15 release as minimal as possible - it was just the consensus changes. We didn't do any work on performance at all and we put all our focus and energy into establishing the QA process and making sure that that change was safe and that was a good process for us to go through. It highlighted what we were missing in our team – we got our recruiters very busy recruiting a Test Manager and more QA people. The second stage after that is performance related work which, as Daniel mentioned, the results of our performance testing fed into what tasks we were gonna start working on for the performance related stuff. Now that work is still in progress - for some of the items that we identified the code is done and that's going through the QA process but it’s not quite there yet. That's basically the two-stage process that we've been through so far. We have a roadmap that goes further into the future that outlines more stuff, but primarily it’s been QA first, performance second. The performance enhancements are close and on the horizon but some of that work should be ongoing for quite some time.
Daniel: 0:13:37.49,0:14:35.14
Some of the changes we need for the performance are really quite large and really get down into the base level view of the software. There's kind of two groups of them mainly. Ones that are internal to the software – to Bitcoin SV itself - improving the way it works inside. And then there's other ones that interface it with the outside world. For one of those in particular we're working closely with another group to make a compatible change - it's not consensus changing or anything like that - but having the same interface on multiple different implementations will be very helpful, right, so we're working closely with them to make improvements for scalability.
Connor: 0:14:32.60,0:15:26.45
Obviously for Bitcoin SV one of the main things that you guys wanted to do, that some of the other developer groups weren't willing to do right now, is to increase the maximum default block size to 128 megabytes. I kind of wanted to pick your brains a little bit about that - a lot of the objection to either removing the block size limit entirely or increasing it on a larger scale is this idea of the infinite block attack, and that kind of came through in a lot of the questions. What are your thoughts on the “infinite block attack”? Is it something that really exists, is it something that miners themselves should be more proactive about preventing, or I guess what are your thoughts on that attack that everyone says will happen if you uncap the block size?
Steve: 0:15:23.45,0:18:28.56
I'm often quoted on Twitter and Reddit - I've said before the infinite block attack is bullshit. Now, that's a statement that I suppose is easy to take out of context, but I think the 128 MB limit is something about which there are probably two schools of thought. There are some people who think that you shouldn't increase the limit to 128 MB until the software can handle it, and there are others who think that it's fine to do it now, so that the limit is already raised by the time the software improves and can handle it, and you don't run into it. Obviously we're from the latter school of thought. As I said before, we've got a bunch of performance increases, performance enhancements, in the pipeline. If we wait till May to increase the block size limit to 128 MB, then those performance enhancements will go in, but we won't be able to actually demonstrate them on mainnet. As for the infinite block attack itself, there are a number of mitigations that you can put in place. Firstly, you know, going down to a bit of the tech detail - when you send a block message, or send any peer-to-peer message, there's a header which has the size of the message. If someone says they're sending you a 30MB message and you're receiving it and it gets to 33MB, then obviously you know something's wrong, so you can drop the connection. If someone sends you a message that's 129 MB and you know the block size limit is 128, you know it's kind of pointless to download that message. So these are just some of the mitigations that you can put in place. When I say the attack is bullshit, I mean it is bullshit in the sense that it's really quite trivial to prevent it from happening. I think there is a bit of a school of thought in the Bitcoin world that if it's not in the software right now then it kind of doesn't exist. I disagree with that, because there are small changes that can be made to work around problems like this. One other aspect of the infinite block attack - and let's not call it the infinite block attack, let's just call it the large block attack - is that a large block takes a lot of time to validate. We've gotten around that by having parallel pipelines for blocks to come in, so you've got a block that's coming in and it's stuck there for two hours or whatever, downloading and validating. At some point another block is going to get mined by someone else, and as long as those two blocks aren't stuck in a serial pipeline then, you know, the problem kind of goes away.
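To make the header-size mitigation Steve describes concrete, here is a minimal sketch (an illustration only, in Python, not Bitcoin SV code - the message framing is simplified) of refusing an oversized or lying peer before downloading a whole "infinite" block:

    import socket
    import struct

    MAX_BLOCK_SIZE = 128 * 1024 * 1024   # assumed 128 MB consensus limit
    HEADER_LEN = 24                      # magic(4) + command(12) + length(4) + checksum(4)

    def recv_exact(sock, n):
        """Read exactly n bytes or raise if the peer closes early."""
        buf = b""
        while len(buf) < n:
            chunk = sock.recv(n - len(buf))
            if not chunk:
                raise ConnectionError("peer closed connection")
            buf += chunk
        return buf

    def read_message(sock):
        header = recv_exact(sock, HEADER_LEN)
        magic, command, length, checksum = struct.unpack("<4s12sI4s", header)
        command = command.rstrip(b"\x00").decode()
        # If the peer advertises a payload bigger than any valid block
        # (e.g. a 129 MB "block" when the limit is 128 MB), don't bother
        # downloading it - just drop the peer.
        if command == "block" and length > MAX_BLOCK_SIZE:
            sock.close()
            raise ValueError("advertised block of %d bytes exceeds limit" % length)
        # We never read more than the advertised length, so a peer that keeps
        # streaming extra data simply fails to parse as the next message and
        # gets its connection dropped.
        return command, recv_exact(sock, length)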
Cory: 0:18:26.55,0:18:48.27
Are there any concerns with the propagation of those larger blocks? Because there are a lot of questions around, you know, what practical size Bitcoin SV could scale to right now, and the concerns around propagating those blocks across the whole network.
Steve: 0:18:45.84,0:21:37.73
Yes, there have been concerns raised about it. I think what people forget is that compact blocks and xThin exist, so a 32MB block does not send 32MB of data in most cases - almost all cases. The concern here that I do find legitimate is the Great Firewall of China. Very early on in Bitcoin SV we started talking with miners on the other side of the firewall, and that was one of their primary concerns. We had anecdotal reports of people who were having trouble getting a stable connection any faster than 200 kilobits per second, and even with compact blocks you still need to get the transactions across the firewall. So we've done a lot of research into that - we tested our own links across the firewall, or rather CoinGeek's links across the firewall, as they've given us access to some of their servers so that we can play around, and we were able to get sustained rates of 50 to 90 megabits per second, which pushes that problem quite a long way down the road into the future. I don't know the maths off the top of my head, but the size of the blocks that that rate can sustain is pretty large. So we're looking at a couple of options - it may well be that the chattiness of the peer-to-peer protocol causes some of these issues with the Great Firewall, so we have someone building a bridge concept/tool where you basically just have one kind of TX vacuum on either side of the firewall that collects them all up and sends them off every one or two seconds as a single big chunk, to eliminate some of that chattiness. The other is we're looking at building a multiplexer that will sit and take stuff from the peer-to-peer network on one side, send it over splitters - over multiple links - and reassemble it on the other side, so we can sort of transit the Great Firewall without too much trouble. But getting back to the core of your question - yes, there is a theoretical limit to block size from propagation time, and that's kind of where Moore's Law comes in. Put in faster links and you kick that can further down the road, and you just keep on putting in faster links. I don't think 128 MB blocks are going to be an issue, though, with the speed of the internet that we have nowadays.
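For readers unfamiliar with compact blocks / xThin, the point that "a 32MB block does not send 32MB of data" boils down to announcing short transaction ids and letting the receiver rebuild the block from its own mempool, requesting only what it lacks. A rough sketch of the idea (illustrative only - the real BIP 152 / xThin wire formats, and the SipHash-based short ids they use, are more involved):

    from hashlib import sha256

    def short_id(txid):
        # Placeholder: real compact blocks derive 6-byte ids with SipHash and per-block keys.
        return sha256(txid).digest()[:6]

    def make_announcement(block_txids):
        """Sender: list of short ids instead of full transactions."""
        return [short_id(txid) for txid in block_txids]

    def reconstruct(announcement, mempool):
        """Receiver: rebuild from mempool, return (found transactions, short ids still needed)."""
        by_short = {short_id(txid): tx for txid, tx in mempool.items()}
        found, missing = [], []
        for sid in announcement:
            if sid in by_short:
                found.append(by_short[sid])
            else:
                missing.append(sid)
        return found, missing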
Connor: 0:21:34.99,0:22:17.84
One of the other changes that you guys are introducing is increasing the max script size - I think right now it's going from 201 to 500 [opcodes]. So a few of the questions we got were: #1, why not uncap it entirely - I think you guys said you ran into some concerns while testing that - and #2, specifically, we also had a question about how certain you are that there are no remaining n-squared bugs or vulnerabilities left in script execution?
Steve: 0:22:15.50,0:25:36.79
It's an interesting decision - we were initially planning on removing that cap altogether, and the next cap that comes into play after that (the next effective cap) is a 10,000 byte limit on the size of the script. We took a more conservative route and decided to wind that back to 500 - it's interesting that we got some criticism for that, when the primary criticism I think that was leveled against us was that it's dangerous to increase that limit to unlimited. We did that because we're being conservative. We did some research into these n-squared bugs, sorry - attacks, that people have referred to. We identified a few of them and we had a hard think about it and thought - look, if we can find this many in a short time we can fix them all (the whack-a-mole approach), but it does suggest that there may well be more unknown ones. So we thought about taking the whack-a-mole approach, but that doesn't really give us any certainty. We will fix all of those individually, but a more global approach is to make sure that if anyone does discover one of these scripts it doesn't bring the node to a screaming halt. The problem here is that because the Bitcoin node is essentially single-threaded, if you get one of these scripts that locks up the script engine for a long time, everything that's behind it in the queue has to stop and wait. So what we wanted to do, and this is something we've got an engineer actively working on right now, is once that script validation code path is properly parallelized (parts of it already are), we'll basically assign a few threads for well-known transaction templates, and a few threads for any type of script. So if you get a few scripts that are nasty and lock up a thread for a while, that's not going to stop the node from working, because you've got these other kind of lanes of the highway that are exclusively reserved for well-known script templates and they'll just keep on passing through. Once you've got that in place, I think we're in a much better position to get rid of that limit entirely, because the worst that's going to happen is your non-standard script pipelines get clogged up but everything else will keep ticking along. There are other mitigations for this as well - you could always put a time limit on script execution if you wanted to, and that would be something that would be up to individual miners. Bitcoin SV's job I think is to provide the tools for the miners, and the miners can then choose, you know, how to make use of them - if they want to set time limits on script execution then that's a choice for them.
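The "lanes of the highway" idea reads roughly like the sketch below (an illustration under assumed details - the thread counts, queue structure and template check are placeholders, not the actual SV design): dedicate some validation workers to well-known script templates so a slow non-standard script can only clog the general-purpose lane, never the whole node.

    import queue
    import threading

    standard_lane = queue.Queue()   # well-known templates (e.g. P2PKH) go here
    general_lane = queue.Queue()    # anything else, possibly slow to validate

    def worker(lane):
        while True:
            tx, validate = lane.get()
            try:
                validate(tx)        # the script interpreter call goes here
            finally:
                lane.task_done()

    def submit(tx, validate, is_well_known_template):
        lane = standard_lane if is_well_known_template else general_lane
        lane.put((tx, validate))

    # e.g. three threads reserved for standard templates, one for everything else;
    # the ratio would be an operator policy choice, not consensus.
    for _ in range(3):
        threading.Thread(target=worker, args=(standard_lane,), daemon=True).start()
    threading.Thread(target=worker, args=(general_lane,), daemon=True).start()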
Daniel: 0:25:34.82,0:26:15.85
Yeah, I'd like to point out that a node here, when it receives a transaction through the peer-to-peer network, doesn't have to accept that transaction - you can reject it. If it looks suspicious to the node it can just say, you know, we're not going to deal with that, or if it takes more than five minutes to execute, or more than a minute even, it can just abort and discard that transaction, right. The only time we can't do that is when it's in a block already, but then it could decide to reject the block as well. Those are all possibilities that could be in the software.
Steve: 0:26:13.08,0:26:20.64
Yeah, and if it's in a block already it means someone else was able to validate it so…
Cory: 0:26:21.21,0:26:43.60
There's a lot of discussion about the re-enabled opcodes coming - OP_MUL, OP_INVERT, OP_LSHIFT, and OP_RSHIFT. Can you maybe explain the significance of those opcodes being re-enabled?
Steve: 0:26:42.01,0:28:17.01
Well, I mean, one of the most significant things is that, other than two which are minor variants of DUP and MUL, they represent almost the complete set of original opcodes. I think that's not necessarily a technical issue, but it's an important milestone. MUL is one that I've heard some interesting comments about. People ask me why we are putting OP_MUL back in if we're planning on changing the arithmetic opcodes to big number operations instead of keeping the 32-bit limit they're currently held to. The simple answer to that question is that we currently have all of the other arithmetic operations except for OP_MUL. We've got add, divide, subtract, modulo - it's odd to have a script system that's got all the mathematical primitives except for multiplication. The other answer to that question is that they're useful - we've talked about a Rabin signature solution that basically replicates the function of DATASIGVERIFY. That's just one example of a use case for this - most cryptographic primitive operations require mathematical operations, and bit shifts are useful for a whole ton of things. So it's really just about completing that work and completing the script engine - or rather not completing it, but putting it back the way that it was meant to be.
Connor: 0:28:20.42,0:29:22.62
Big Num vs 32 Bit. Daniel - I think I saw you answer this on Reddit a little while ago - the new opcodes use logical shifts while Satoshi's version used arithmetic shifts. The general question that I think a lot of people keep bringing up, maybe in a rhetorical way, is: why not restore it back to the way Satoshi had it exactly? What are the benefits of changing it now to operate a little bit differently?
Daniel: 0:29:18.75,0:31:12.15
Yeah, there are two parts there - the big number one, and LSHIFT being a logical shift instead of arithmetic. So when we re-enabled these opcodes we looked at them carefully and adjusted them slightly, as we did in the past with OP_SPLIT. The new LSHIFT and RSHIFT are bitwise operators. They can be used to implement arithmetic shifts - I think I've posted a short script that did that - but we can't do it the other way around, right. You couldn't use an arithmetic shift operator to implement a bitwise one. It's because of the ordering of the bytes in the arithmetic values - the values that represent numbers. They're little-endian, which means the bytes are swapped around compared to what many other systems use - what I'd consider normal - which is big-endian. And if you start shifting that properly as a number, then the shifting sequence in the bytes is a bit strange, so it couldn't go the other way around - you couldn't implement bitwise shift with arithmetic - so we chose to make them bitwise operators. That's what we proposed.
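A small illustration of the distinction Daniel is drawing (a sketch only; real script numbers also carry a sign bit and minimal-encoding rules that are ignored here): because script numbers are little-endian byte strings, shifting the raw bytes is not the same as shifting the number, so an arithmetic shift needs a decode/shift/re-encode step that a bitwise operator does not.

    def bitwise_lshift(data, n):
        """Shift the raw byte string left by n bits, keeping its length (high bits fall off)."""
        width = len(data) * 8
        value = int.from_bytes(data, "big")            # treat as a plain bit string
        shifted = (value << n) & ((1 << width) - 1)
        return shifted.to_bytes(len(data), "big")

    def arithmetic_lshift(num_le, n):
        """Shift the little-endian script number left by n bits (i.e. multiply by 2**n)."""
        value = int.from_bytes(num_le, "little")
        length = (value.bit_length() + n + 7) // 8 or 1
        return (value << n).to_bytes(length, "little")

    num = (513).to_bytes(2, "little")                  # the number 513, stored as b'\x01\x02'
    print(bitwise_lshift(num, 8).hex())                # '0200'   - bytes shifted, number mangled
    print(arithmetic_lshift(num, 8).hex())             # '000102' - little-endian for 513 * 256 = 131328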
Steve: 0:31:10.57,0:31:51.51
That was essentially a decision that was made in May, or rather a consequence of decisions that were made in May. In May we reintroduced OP_AND, OP_OR, and OP_XOR, and the decision to replace three different string operators with OP_SPLIT was made then as well. So that was not a decision that we made unilaterally - it was a decision that was made collectively with all of the BCH developers. Well, not all of them were actually in all of the meetings, but they were all invited.
Daniel: 0:31:48.24,0:32:23.13
Another example of that is that we originally proposed OP_2DIV and OP_2MUL, I think - a single operator that multiplies the value by two, right - but it was pointed out that that can very easily be achieved by just doing a multiply by two instead of having a separate operator for it, so we scrapped those, we took them back out, because we wanted to keep the number of operators to a minimum.
Steve: 0:32:17.59,0:33:47.20
There was an appetite for keeping the operators minimal. I mean, the idea to replace OP_SUBSTR, OP_LEFT and OP_RIGHT with the OP_SPLIT operator actually came from Gavin Andresen. He made a brief appearance in the Telegram workgroups while we were working out what to do with the May opcodes, and obviously Gavin's word carries a lot of weight and we listen to him. But because we had chosen to implement the May opcodes (the bitwise opcodes) and treat the data as big-endian data streams (well, sorry, big-endian isn't really applicable - just plain data strings), it would have been completely inconsistent to implement LSHIFT and RSHIFT as integer operators, because then you would have had a set of bitwise operators that operated on two different kinds of data, which would have just been nonsensical and very difficult for anyone to work with. I mean, it's a bit like P2SH - it wasn't a part of the original Satoshi protocol, but once some things are done they're done, and if you want to make forward progress you've got to work within the framework that exists.
Daniel: 0:33:45.85,0:34:48.97
When we get to the big number ones it gets really complicated - big number implementations - because then you can't change the behavior of the existing opcodes, and I don't mean OP_MUL, I mean the other ones that have been there for a while. You can't suddenly make them big number ones without seriously looking at what scripts there might be out there and the impact of that change on those existing scripts, right. The other point is you don't know what scripts are out there because of P2SH - there could be scripts whose content you don't know, and you don't know what effect changing the behavior of these operators would have on them. The big number thing is tricky, so another option might be - yeah, I don't know what the options are; it needs some serious thought.
Steve: 0:34:43.27,0:35:24.23
That's something we've reached out to the other implementation teams about - we would actually really like their input on the best ways to go about restoring big number operations. It has to be done extremely carefully, and I don't know if we'll get there by May next year, or when, but we're certainly willing to put a lot of resources into it and we're more than happy to work with BU or XT or whoever wants to work with us on getting that done and getting it done safely.
Connor: 0:35:19.30,0:35:57.49
Kind of along this similar vein, you know, Bitcoin Core introduced this concept of standard scripts, right - standard and non-standard scripts. I had a pretty interesting conversation with Clemens Ley about use cases for "non-standard scripts", as they're called. I know at least one developer on Bitcoin ABC is very hesitant, or kind of pushed back on him about doing that, so what are your thoughts about non-standard scripts and the entirety of an IsStandard check?
Steve: 0:35:58.31,0:37:35.73
I'd actually like to repurpose the concept. I think I mentioned before multi-threaded script validation and having some dedicated well-known script templates - when you say the words "well-known script template" there's already a check in Bitcoin that kind of tells you if it's well-known or not, and that's IsStandard. I'm generally in favor of getting rid of the notion of standard transactions, but it's actually a decision for miners, and it's really more of a behavioral change than it is a technical change. There's a whole bunch of configuration options that miners can set that affect what they consider to be standard and not standard, but the reality is not too many miners are using those configuration options. So standard transactions as a concept is meaningful to an arbitrary degree, I suppose, but yeah, I would like to make it easier for people to get non-standard scripts into Bitcoin so that they can experiment, and from discussions I've had with CoinGeek they're quite keen on making their miners accept, you know, at least initially, a wider variety of transactions.
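For context, an IsStandard-style "well-known template" check is essentially pattern-matching the locking script against a fixed layout. A toy sketch (the opcode byte values are the real Bitcoin ones, but the actual check in node software covers more templates, sizes and policy rules):

    OP_DUP, OP_HASH160, OP_EQUALVERIFY, OP_CHECKSIG = 0x76, 0xA9, 0x88, 0xAC

    def is_p2pkh(script):
        """True if the locking script matches the 25-byte pay-to-public-key-hash template."""
        return (
            len(script) == 25
            and script[0] == OP_DUP
            and script[1] == OP_HASH160
            and script[2] == 0x14          # push of 20 bytes (the public key hash)
            and script[23] == OP_EQUALVERIFY
            and script[24] == OP_CHECKSIG
        )

    def is_standard(script):
        # Real nodes also recognise P2PK, bare multisig, OP_RETURN data carriers, etc.
        return is_p2pkh(script)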
Daniel: 0:37:32.85,0:38:07.95
So I think IsStandard will remain important within the implementation itself for efficiency purposes, right - you want to streamline the base use case of cash payments and prioritize them. That's where it will remain important, but on the interfaces from the node to the rest of the network, yeah, I could easily see it being removed.
Cory: 0:38:06.24,0:38:35.46
Connor mentioned that there are some people that disagree with Bitcoin SV and what they're doing - a lot of questions around, you know, why November? Why implement these changes in November - they think that maybe a six-month delay might avoid a split. Well, first off, what do you think about the idea of a potential split, and I guess what is the urgency for November?
Steve: 0:38:33.30,0:40:42.42
Well, in November there's going to be a divergence of consensus rules regardless of whether we implement these new opcodes or not. Bitcoin ABC released their spec for the November hard fork change, I think on August 16th or 17th, something like that, and their client as well, and it included CTOR and it included DSV. Now, for the miners that commissioned the SV project, CTOR and DSV are controversial changes, and once they're in, they're in. They can't be reversed - I mean, CTOR maybe you could reverse at a later date, but DSV, once someone's put a P2SH transaction - or even a non-P2SH transaction - into the blockchain using that opcode, it's irreversible. So it's interesting that some people refer to the Bitcoin SV project as causing a split - we're not proposing to do anything that anyone disagrees with. There might be some contention about changing the opcode limit, but as for what we're doing - I mean, Bitcoin ABC already published their spec for May and it is our spec for the new opcodes. So in terms of urgency - should we wait? Well, the fact is that we can't - come November, you know, it's a bit like Segwit: once Segwit was in, yes, you arguably could get it out by spending everyone's anyone-can-spend transactions, but in reality it's never going to be that easy and it's going to cause a lot of economic disruption. So yeah, that's it. We're putting our changes in because it's not going to make a difference either way in terms of whether there's going to be a divergence of consensus rules - there's going to be a divergence whatever our changes are. Our changes are not controversial at all.
Daniel: 0:40:39.79,0:41:03.08
If we didn't include these changes in the November upgrade we'd be pushing ahead with a no-change, right, but the November upgrade is there, so we should use it while we can, adding these non-controversial changes to it.
Connor: 0:41:01.55,0:41:35.61
Can you talk about DATASIGVERIFY? What are your concerns with it? The general concern that's been kind of floated around, because of Ryan Charles, is the idea that it's a subsidy, right - that it takes a whole megabyte and kind of crunches that down, and the computation time stays the same but maybe the cost is less - do you kind of share his view on that, or what are your concerns with it?
Daniel: 0:41:34.01,0:43:38.41
Can I say one or two things about this? There are different ways to look at it, right. I'm an engineer - my specialization is software, so on the economics of it I hear different opinions. I trust some more than others, but I am NOT an economist. With my limited expertise I kind of agree with the ones who say it's a subsidy - it looks very much like it to me - but yeah, that's not my area. What I can talk about is the software - so adding DSV adds really quite a lot of complexity to the code, right, and it's a big change to add that. And what are we going to do - every time someone comes up with an idea we're going to add a new opcode? How many opcodes are we going to add? I saw reports that Jihan was talking about hundreds of opcodes or something like that, and it's like, how big is this client going to become - how big is this node - is it going to have to handle every kind of weird opcode that's out there? The software is just going to get unmanageable, and with DSV my main consideration at the beginning was, you know, if you can implement it in script you should do it, because that way it keeps the node software simple, it keeps it stable, and, you know, it's easier to test that it works properly and correctly. It's almost like adding (?) code from a microprocessor - you know, why would you do that if you can implement it already in the script that is there.
Steve: 0:43:36.16,0:46:09.71
It's actually an interesting inconsistency, because when we were talking about adding the opcodes in May, the philosophy that seemed to drive the decisions that we were able to form a consensus around was to simplify and keep the opcodes as minimal as possible (i.e. where you could replicate a function by using a couple of primitive opcodes in combination, that was preferable to adding a new opcode that replaced them). OP_SUBSTR is an interesting example - it's a combination of SPLIT, SWAP and DROP opcodes to achieve it. So at the really primitive script level we've got this philosophy of let's keep it minimal, and at this other (?) philosophy it's all let's just add a new opcode for every primitive function - and Daniel's right, it's a question of opening the floodgates. Where does it end? If we're just going to go down this road, it almost opens up the argument: why have a scripting language at all? Why not just hard-code all of these functions in, one at a time? You know, pay-to-public-key-hash is a well-known construct (?) and not bother executing a script at all - but once we've done that we take away all of the flexibility for people to innovate. So it's a philosophical difference, I think, but I think it's one where the position of keeping it simple does make sense. All of the primitives are there to do what people need to do. The things that people feel they can't do are because of the limits that exist. If we had no opcode limit at all, if you could make a gigabyte transaction, so a gigabyte script, then you could do any kind of crypto that you wanted, even with 32-bit integer operations. Once you get rid of the 32-bit limit, of course, a lot of those scripts come out a lot smaller, so a Rabin signature script shrinks from 100MB to a couple hundred bytes.
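Steve's OP_SUBSTR example can be made concrete with a little stack-machine sketch (simplified - no script-number encoding or error handling, and the exact opcode sequence may differ from what he has in mind): SUBSTR falls out of two SPLITs plus SWAP and DROP.

    def op_split(stack):          # x n -> x[:n] x[n:]
        n = stack.pop(); x = stack.pop()
        stack.extend([x[:n], x[n:]])

    def op_swap(stack):           # a b -> b a
        stack[-1], stack[-2] = stack[-2], stack[-1]

    def op_drop(stack):           # a ->
        stack.pop()

    def substr(stack):            # s begin size -> s[begin:begin+size]
        size = stack.pop()
        op_split(stack)           # -> s[:begin] s[begin:]
        op_swap(stack)            # -> s[begin:] s[:begin]
        op_drop(stack)            # -> s[begin:]
        stack.append(size)
        op_split(stack)           # -> s[begin:begin+size] s[begin+size:]
        op_drop(stack)            # -> s[begin:begin+size]

    st = [b"hello world", 6, 5]
    substr(st)
    print(st)                     # [b'world']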
Daniel: 0:46:06.77,0:47:36.65
I lost a good six months of my life diving into script, right. Once you start getting into the language and what it can do, it is really pretty impressive how much you can achieve within script. Bitcoin was designed - was released originally - with script. I mean, it didn't have to be: instead of having a transaction with script you could have accounts, and you could say transfer, you know, so many BTC from this public key to this one - but that's not the way it was done. It was done using script, and script provides so many capabilities if you start exploring it properly. If you start really digging into what it can do, yeah, it's really amazing what you can do with script. I'm really looking forward to seeing some very interesting applications from that. I mean, there was Awemany - his zero-conf script was really interesting, right. It relies on DSV, which is a problem (and there are some other things that I don't like about it), but him diving in and using script to solve this problem was really cool; it was really good to see that.
Steve: 0:47:32.78,0:48:16.44
I asked a question of a couple of people in our research team that have been working on the Rabin signature stuff this morning, actually, and I wasn't sure where they were up to with it, but they're actually working on a proof of concept (which I believe is pretty close to done) which is a Rabin signature script - it will use smaller signatures so that it can fit within the current limits, but it will be, you know, effectively the same algorithm (as DSV). So I can't give you an exact date on when that will happen, but it looks like we'll have a Rabin signature in the blockchain soon (a mini Rabin signature).
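For readers wondering why a Rabin signature is attractive here: verification needs nothing beyond multiplication and modular reduction, which is exactly the kind of arithmetic script can express once the integer-size limits are lifted. A bare-bones sketch (the hashing and padding scheme below is an invented placeholder, not what the SV research team is actually building):

    from hashlib import sha256

    def h(message, n):
        """Hash the message and reduce it into the range [0, n)."""
        return int.from_bytes(sha256(message).digest(), "big") % n

    def rabin_verify(message, signature, padding, n):
        # Valid if signature^2 = H(message || padding) (mod n), where n = p*q is the public key.
        return pow(signature, 2, n) == h(message + padding.to_bytes(8, "big"), n)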
Cory: 0:48:13.61,0:48:57.63
Based on your responses I think I kind of already know the answer to this question, but there are a lot of questions about ending experimentation on Bitcoin. I was going to kind of turn that into: with the plan that Bitcoin SV is on, do you guys see, like, a potential one final release - you know, that there are going to be no new opcodes ever released (like maybe five years down the road we just solidify the base protocol and move forward with that) - or are you guys more of the idea of being open-ended, that new opcodes can be introduced under appropriate testing?
Steve: 0:48:55.80,0:49:47.43
I think you've got to factor in what I said before about the philosophical differences. I think new functionality can be introduced just fine. Having said that - yes, there is a place for new opcodes, but it's probably a limited place, and in my opinion it's the cryptographic primitive functions: for example, CHECKSIG uses ECDSA with a specific elliptic curve, HASH256 uses SHA-256 - at some point in the future those are going to no longer be as secure as we would like them to be and we'll replace them with different hash functions, verification functions, at some point, but I think that's a long way down the track.
Daniel: 0:49:42.47,0:50:30.3
I'd like to see more data too. I'd like to see evidence that these things are needed, and the way I could imagine that happening is, you know, with the full scripting language some solution is implemented and we discover that this is really useful, and over a period measured in years, not days, we find a lot of transactions are using this feature - then maybe, you know, maybe we should look at introducing an opcode to optimize it. But optimizing before we even know if it's going to be useful - yeah, that's the wrong approach.
Steve: 0:50:28.19,0:51:45.29
I think that optimization is actually going to become an economic decision for the miners. From the miner's point of view, would it make more sense for them to be able to optimize a particular process - does it reduce costs for them such that they can offer a better service to everyone else? Yeah, so ultimately these decisions are going to be miners' decisions, not developer decisions. Developers of course can offer their input - I wouldn't expect every miner to be an expert on script - but as we're already seeing, miners are actually starting to employ their own developers. I'm not just talking about us - there are other miners in China that I know have got some really bright people on their staff that question and challenge all of the changes - study them and produce their own reports. We've been lucky to actually be able to talk to some of those people and have some really fascinating technical discussions with them.
submitted by The_BCH_Boys to btc [link] [comments]

Core/Blockstream is living in a fantasy world. In the real world everyone knows (1) our hardware can support 4-8 MB (even with the Great Firewall), and (2) hard forks are cleaner than soft forks. Core/Blockstream refuses to offer either of these things. Other implementations (eg: BU) can offer both.

It's not even mainly about the blocksize.
There's actually several things that need to be upgraded in Bitcoin right now - malleability, quadratic verification time - in addition to the blocksize which could be 4-8 megs right now as everyone has been saying for years.
The network is suffering congestion, delays and unpredictable delivery this week - because of 1 MB blocks - which is all Core/Blockstream's fault.
Chinese miner Jiang Zhuo'er published a post today where once again we hear that people's hardware and infrastructure would already support 4-8 MB blocks (including the Great Firewall of China) - if only our software could "somehow" be upgraded to support 4-8 MB blocks.
https://np.reddit.com/btc/comments/5eh2cc/why_against_segwit_and_core_jiang_zhuoer_who/
https://np.reddit.com/Bitcoin/comments/5egroc/why_against_segwit_and_core_jiang_zhuoer_who/
Bigger blocks would avoid the congestion we're seeing this week - and would probably also cause a much higher price.
The main reason we don't have 4-8 MB blocks right now is Core/Blockstream's fault. (And also, as people are now realizing: it's everyone's fault, for continuing to listen to Core/Blockstream, after all their failures.)
Much more complex changes have been rolled out in other coins, with no problems whatsoever. Code on other projects gets upgraded all the time, and Satoshi expected Bitcoin's code to get upgraded too. But Core/Blockstream don't want to upgrade.
Coins can upgrade as long as they maintain their "meta-rules"
Everyone has a fairly clear intuition of what a coin's "meta-rules" are, and in the case of Bitcoin these include:
Note that "1 MB max blocksize" is not a meta-rule of Bitcoin. It was a temporary anti-spam measure, mentioned nowhere in the original descriptions, and it was supposed to be eliminated long ago.
Blocksizes have always increased, and people intuitively understand that we should get the most we can out of our hardware and infrastructure - which would support 4-8 MB blocks now, if only some dev team would provide that code.
Core/Blockstream, for their own mysterious reasons, refuse to provide that code. But that is their problem - not our problem.
It's not rocket science, and we're not dependent on Core/Blockstream
Much of the "rocket science" of Bitcoin was already done by Satoshi, and further incremental improvements have been added since.
Increasing the blocksize is a relatively simple improvement, and it can be done by many, many other dev teams aside from Core/Blockstream - such as BU, which proposes a novel approach offering configuration settings allowing the market to collaboratively determine the blocksize, evolving over time.
We should also recall that BitPay also proposed another solution, based on a robust statistic using the median of previous blocksizes.
One important characteristic about both these proposals is that they make the blocksize configurable - ie, you don't need to do additional upgrades later. This is a serious disadvantage of SegWit - which is really rather primitive in its proposed blocksize approach - ie, it once-again proposes some "centrally planned", "hard-coded" numbers.
After all the mess of the past few years of debate, everyone now knows that "centrally planned hard-coded blocksize numbers" are ridiculous. But this is what we get from the "experts" at Core/Blockstream.
And meanwhile, once again, this week the network is suffering congestion, delays and unpredictable delivery - because Core/Blockstream are too paralyzed and myopic and arrogant to provide the kind of upgrade we've been asking for.
Instead, they have wimped out and offered merely a "soft fork" with almost no immediate capacity increase at all - in other words, an insulting and messy hack.
This is why Core/Blockstream's SegWit-as-a-spaghetti-code-soft-fork-with-almost-no-immediate-capacity-increase will probably get rejected by the community - because it's too little, too late, and in the wrong package.
Engineering isn't the only consideration
There are considerations involving economics and politics as well, which any Bitcoin dev team must take into account when deciding how to package and deploy the code improvements they offer to users - and on this level, Core/Blockstream has failed miserably.
They have basically ignored the fact that many people are already dependent for their economic livelihood on the $12 billion market cap in the blockchain flowing smoothly.
And they also ignored the fact that people don't like to be patronized / condescended to / dictated to.
Core/Blockstream did not properly take these considerations into account - so if their current SegWit-as-a-spaghetti-code-soft-fork-with-almost-no-immediate-capacity-increase offering gets rejected, then it's all their fault.
Core/Blockstream hates hard forks
Core/Blockstream have an extreme aversion to what they pejoratively call "hard forks" (which Bitcoin Unlimited developer Thomas Zander u/ThomasZander correctly pointed out should be called by the neutral terminology "protocol upgrades").
Core/Blockstream seem to be worried - perhaps rightfully so - that any installation of new software on the network would necessarily constitute "full node referendum" which might dislodge Core/Blockstream from their position as "incumbents". But, again, that's their problem, not ours. Bitcoin was always intended to be upgraded by a "full node referendum" - regardless of whether that might unseat any currently "incumbent" dev team which had failed to offer the best code for the network.
https://np.reddit.com/btc/search?q=blockstream+hard+fork&restrict_sr=on
Insisting on "soft forks" and "small blocks" means that Core/Blockstream's code will always be inferior.
Core/Blockstream's aversion to "hard forks" (aka "protocol upgrades") will always have horrible consequences for their code quality.
Blockstream is required (by law) to serve their investment team, whose lead investors include legacy "fantasy fiat" finance firms such as AXA
This means that Blockstream is not required (by law) to serve the Bitcoin community - they might, or they might not. And they might, or might not, even tell us what their actual goals are.
Their corporate owners want soft forks (to avoid the possibility of another dev team coming to prominence), and they want small blocks (which they believe will support their proposed off-chain solutions such as LN - which may never even be released, and will probably be centralized if it is ever released).
This simply conflicts with the needs of the Bitcoin community. Which is the main reason why Blockstream is probably doomed - they are legally required to serve their investors, not the Bitcoin community.
If we're installing new code, we might as well do a hard fork
There's around 5,000 - 6,000 nodes on the network. If Core/Blockstream expected 95% of them to upgrade to SegWit-as-a-soft-fork, then with such a high adoption level, they might as well have done it as a much cleaner hard fork anyways. But they didn't - because they don't prioritize our needs, they prioritize the needs of their investors.
So instead of offering an upgrade offering the features we wanted (including on-chain scaling), implemented the way we wanted (as a hard fork) - they offered us everything we didn't want: a messy spaghetti-code soft fork, which doesn't even include the features we've been clamoring about for years (and which the congested network actually needs right now, this week).
Core/Blockstream has betrayed the early promise of SegWit - losing many of its early supporters, including myself
Remember, the main purpose of SegWit was to be a code cleanup / refactoring. And you do not do a code cleanup / refactoring by introducing more spaghetti code just because devs are afraid of "full node referendums" where they might lose "power".
Instead, devs should be honest, and actually serve the needs of community, by giving us the features we want, packaged the way we want them.
As noted in the link in the section title above, I myself was an outspoken supporter championing SegWit on the day when I first saw the YouTube video of Pieter Wuille explaining it at one of the early "Scaling Bitcoin" conferences.
Then I found out that doing it as a soft fork would add unnecessary "spaghetti code" - and I became one of the most outspoken opponents of SegWit.
By the way, it must have been especially humiliating for a talented programmer like Pieter Wuille to have to contort SegWit into the "spaghetti-code soft fork" proposed by a mediocre programmer like Luke-Jr. Another tragic Bitcoin farce brought to you by Blockstream - maybe someday we'll get to hear all the juicy, dreary details.
Dev teams that don't listen to their users... get fired
We told Core/Blockstream time and time again that we're not against SegWit or LN per se - we simply also want to:
This was expressed again, most emphatically, at the Hong Kong meeting, where some Core/Blockstream-associated devs seemed to make some commitments to give users what we wanted. But later they dishonored those commitments anyways, and used fuzzy language to deny that they had ever even made them - further losing the confidence of the users.
Any dev team has to earn the support of the users, and Core/Blockstream (despite all their financial backing, despite having recruited such a large number of devs, despite having inherited the original code base) is steadily losing that support - because they have not given people what we asked for, and they have not compromised one inch on very simple issues - and to top it off, they have been dishonest.
They have also tried to dictate to the users - and users don't like this. Some users might not know coding - but others do. One example is ViaBTC - who is running a very big mining pool, with a very fast relay network, and also offering cloud mining - and emphatically rejecting the crippled code from Core/Blockstream. Instead of running Core/Blockstream's inferior crippled code, ViaBTC runs Bitcoin Unlimited.
This was all avoidable
Just think for a minute how easy it would have been for Core/Blockstream to package their offering more attractively - by including 4 MB blocks for example, and by doing SegWit as a hard fork. Totally doable - and it would have kept everyone happy - avoiding congestion on the network for several more years, while also paving the way for their dreams of LN - and also leaving Core/Blockstream "in power".
But instead, Core/Blockstream stupidly and arrogantly refused to listen or cooperate or compromise with the users. And now the network is congested, and it is unclear whether users will adopt Core/Blockstream's too-little too-late offering of SegWit-as-a-spaghetti-code-soft-fork-with-almost-no-immediate-capacity-increase.
So the current problems are all Core/Blockstream's fault - but also everyone's fault, for continuing to listen to Core/Blockstream.
The best solution now is to reject Core/Blockstream's inferior roadmap, and consider a roadmap from some other dev team (such as BU).
submitted by ydtm to btc [link] [comments]

According to the WSJ, regulators have decided on a "comprehensive ban on channels for the buying or selling of the virtual currency in China"

Here is the link to the original WSJ article.
Full Article: (Thanks knight222)
BEIJING—Chinese authorities are moving toward a broad clampdown on bitcoin trading, testing the resilience of the virtual currency as well as the idea its decentralized nature protects it from government interference. Regulators have decided on a comprehensive ban on channels for the buying or selling of the virtual currency in China that goes beyond plans to shut commercial bitcoin exchanges, according to people familiar with the matter.
Officials communicated the message to several industry executives at a closed-door meeting in Beijing on Friday, according to people who were at the meeting. Until last week, many entrepreneurs in China’s bitcoin circles had thought authorities might shut down only commercial trading activity while tolerating peer-to-peer, or over-the-counter, bitcoin platforms, which enable buyers and sellers to find each other and trade directly.
The Chinese plan represents some of the most draconian measures any government has taken to control bitcoin, created by an anonymous programmer nearly a decade ago as an alternative to official currencies, and word of it sent another wave of anxiety through the Chinese bitcoin community. China has digitized its financial sector faster than any other nation. Authorities continue to support the trend, though their public comments also suggest concern bitcoin could weaken official control of the country’s money supply. The crackdown on the bitcoin ecosystem represents Beijing’s possibly biggest effort so far to limit expansion of a system to rival the yuan. In a previous crackdown, in 2009, the central bank banned the use of tokens valued at billions of dollars created in China’s massive online-gaming networks for real-world purchases.
Bit of Uncertainty: China's clampdown on bitcoin has hurt global prices and domestic trading volumes; for now the country remains a major center for bitcoin mining.
[Chart: bitcoin price, 2016 through September 2017; mining in 2017, share by country (as of Aug. 31); daily trading, share by currency. Sources: coindesk (price); bitcoinity.org (trading volume, mining)]
A quasiregulatory body called the National Internet Finance Association of China (NIFA) warned investors about virtual currency trading in a statement last week and said that bitcoin platforms lack “legal basis” to operate in the country.
A goal of China’s monetary regulation is to ensure that “the source and destination of every piece of money can be tracked,” Li Lihui, a NIFA official told a technology conference in Shanghai on Friday. A lack of clarity from regulators has fueled worries about how far the government will go. One uncertainty, for example, is whether the ban will affect bitcoin deals made over social-messaging apps such as WeChat . People in the industry say a wave of bitcoin users in recent days migrated from WeChat to the encrypted messaging service Telegram.
A broader clampdown will likely include blocking mainland access to websites of foreign bitcoin exchanges such as Coinbase in the U.S. and Bitfinex in Hong Kong, say people familiar with the matter. Last weekend, the largest domestic bitcoin exchanges—BTCC, Huobi and OKCoin—all said they would halt trading services in the coming weeks, sending prices of bitcoin on the global market tumbling. Bitcoin traded at $3,947 apiece on Monday evening in Beijing, roughly 26% off its high of $4,960.72 on Sept. 1. Industry advocates hail bitcoin for allowing users to transact with each other without the involvement of a central authority. In reality, users access the market for virtual currencies via services and businesses that are centralized in real locations and therefore are susceptible to third parties. Any attempt by China to interfere broadly in the bitcoin network would test that notion further.
On the flip side, if bitcoin does prove resilient, China could be shutting itself out of a growing global market. As recently as last year, China accounted for the bulk of global bitcoin trading activity, but its share has dropped dramatically since the government started attempting to cool the market. China now accounts for less than 15% of bitcoin trading volume.
Blocking overseas exchange sites would add them to a long list of websites Beijing considers too sensitive, including Google and Facebook.
Chinese authorities haven’t made public their stance on virtual currency trading. The People’s Bank of China and the Ministry of Internet and Information Technology didn’t respond to requests for comment on bitcoin measures.
A document passed around at Friday’s meeting and reviewed by The Wall Street Journal instructs Beijing-based exchanges to unwind their operations and provide information on bank accounts used for clients’ deposits by Wednesday.
While China’s sway in bitcoin trading volumes has faded, the country remains a major creator of new bitcoin through a process called mining. Chinese bitcoin miners operate a vast collection of computers for the purpose in remote areas like northwestern Xinjiang, where they can access electricity for cheap. Until now, Chinese miners considered themselves immune from Beijing’s evolving stance on bitcoin trading. One entrepreneur said miners are now worried about authorities moving to limit their operations. “Using VPNs as a workaround will be difficult,” he said, referring to virtual private networks that allow users to circumvent China’s so-called Great Firewall.
Chinese miners loom large in the global bitcoin mining network, also serving an important role in the upkeep of the bitcoin ledger. Potential interference in how they connect to and use the internet could disrupt, at least temporarily, both the creation of new bitcoin and the speed at which global bitcoin transactions are confirmed, say people in the industry. The stepped-up tightening by regulators comes as China’s top leaders have been vocal about battling money laundering, in advance of an important leadership transition this fall. Last week, China’s State Council released guidelines aimed at better coordination between regulators to address the transfer of capital for illicit purposes.
—James T. Areddy in Shanghai and Liyan Qi in Beijing contributed to this article.
submitted by ILikeGreenit to btc [link] [comments]

The /r/btc China Dispatch: Episode 3 - Block Size, Chinese Miners and The Great Firewall

Good Sunday morning, /btc! The question of why Chinese miners don’t use a node outside of China to route around the Great Firewall of China (hereafter abbreviated as “the GFW”) and relay blocks more efficiently, a question with profound implications for any future block size proposal, has come up more than once over the last couple of days here, so for this episode I personally submitted the above question to one of China’s largest and most active bitcoin forums, 8btc.com and got some interesting responses that might surprise you.
For those of you who missed the last two episodes, you can catch up here and here. Also by popular request I will see if I can submit translations of the most upvoted comments here back to 8btc.com so we can establish an ongoing dialogue between both sides of the GFW.
[OP]
Posted by KoKansei
Subject: If Chinese miners are concerned that the GFW will affect their ability to process big blocks, why don’t they set up a node outside of China?
My question concerns the subject above.
No doubt Chinese bitcoiners are well-aware that an irreconcilable schism has occurred in the Bitcoin development sphere and this split has shaken many users’ confidence in the currency. As a result a majority of miners, including those in China, have expressed support for the Bitcoin Classic client, which will increase the upper block size limit. However, although many miners within China support classic, they have also expressed concerns about further increases in the block size going forward since the GFW may limit the bandwidth of their connection with nodes outside of China, thereby resulting in losses to their mining business.
As a mod of /btc (one of the largest uncensored forums outside of China) I would like to pose a question to the esteemed regulars of this board: if Chinese miners are concerned that the GFW will affect their ability to process large blocks, why don’t they set up nodes outside of China?
If this thread gets a fair number of responses I will repost your thoughts to /btc to promote an exchange of ideas between our two bitcoin communities. Thank you!
[Reply 1]
Posted by LaibitePool (LTC1BTC.com)
I would like to respond briefly as the manager of a mining pool.
  1. A new block can only be broadcast outward by a single node and two blocks which are produced simultaneously by two different nodes cannot be broadcast at the same time.
  2. For every second that a broadcasted block is delayed, there is a 1/600 chance that the network will produce a new block, so the risk of the block being orphaned increases by 1/600 (see the worked sketch just after this list).
  3. Currently the majority of hashing power is concentrated in China and the state of China’s Internet within China is quite good so the nodes from which China’s pools initially broadcast are located in China.
  4. An initial broadcast to foreign nodes must get over the GFW. Currently all large mining pools have already established nodes outside of China, but they’re only there to speed up the whole process and do not allow circumventing of the GFW.
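A quick numerical check of item 2 above (a sketch only, assuming block discovery is a Poisson process with a 600-second mean): the chance that some other miner finds a competing block during a propagation delay of t seconds is 1 - exp(-t/600), which is roughly t/600 for small t, i.e. about 0.17% of extra orphan risk per second of delay.

    import math

    for delay in (1, 5, 30, 180):   # seconds of extra propagation delay
        p_orphan = 1 - math.exp(-delay / 600)
        print(f"{delay:>4}s delay -> ~{p_orphan:.2%} chance a competing block appears first")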
Supplementary Edit:
It is not at all uncommon for Internet traffic going across national borders to be relatively slow, so the issue can’t be entirely blamed on just the GFW. Speeds are largely affected by a country / region’s total international bandwidth limits as well as related network topology.
For example, we tested transmission from Shenzhen to Hong Kong and found that when you use suitable data centers the ping back and forth is less than 10 ms, but when you try and transmit a block from Hong Kong to the US or Europe (note that the GFW is not an obstacle here!) transmission is much slower than within China.
I’m not sure who first proposed the notion that “block transmission is affected by the GFW,” but I don’t think this notion is really accurate. Putting it like that gives people the impression that bitcoin has already been subjugated by some kind of evil organization, producing negative effects as well as conflict and division within the community.
It is more accurate to say: the transmission of blocks is limited by China’s outgoing international bandwidth availability which has always been poor. This is mostly because China’s domestic Internet is already sufficiently vast and the needs of the vast majority of users can be satisfied domestically. This is different that the US and Europe where almost all services involve transmission across national borders. If you’re interested in more details regarding China’s outgoing international bandwidth, you can take a look at a few reports, like “How Embarrassing! China’s Per Capita International Trunk Line Bandwidth is Only Half of Africa’s!”
[Reply 2]
Posted by KoKansei
Thanks a lot for taking time to post such a detailed response.
If I may, I’d like to ask two more questions:
(1) Given the current situation with the GFW, what do you think is the highest block size that Chinese miners are capable of dealing with? Is it possible to estimate such a number?
(2) My understanding is that the most important part of a new block is the header. Were a Chinese miner to establish a node outside of China, then it should be possible for them to send just the header of any new blocks across the GFW to said node, where the block can be broadcast. Using this method should solve the issue of having to transmit a whole block across the GFW. Are there currently any miners who are using or considering using this method?
Thanks again for all your insight!
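For reference on question (2): a Bitcoin block header is a fixed 80 bytes (version, previous block hash, merkle root, time, bits, nonce), so the header itself is trivial to push across even a very slow link - the replies below explain why that alone doesn't buy much. A minimal serialization sketch (the field values are made up for illustration):

    import struct

    def serialize_header(version, prev_hash, merkle_root, timestamp, bits, nonce):
        assert len(prev_hash) == 32 and len(merkle_root) == 32
        # 4 + 32 + 32 + 4 + 4 + 4 = 80 bytes, little-endian integers
        return struct.pack("<I32s32sIII", version, prev_hash, merkle_root,
                           timestamp, bits, nonce)

    header = serialize_header(0x20000000, b"\x00" * 32, b"\x11" * 32,
                              1475020800, 0x18013ce9, 123456789)
    print(len(header))   # 80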
[Reply 3]
Posted by Ma_Ya
I would like to respond briefly as a dedicated bitcoiner.
  1. I don’t think that the developer sphere has necessarily undergone a schism, it’s just that now there exists a new competing version. Even in the event of a schism it is still possible to restore consensus. The only people who have had their confidence shaken are a minority of bitcoin speculators and traders; the confidence of the majority of bitcoin fans / faithful will not be shaken simply because of a split among the developers. A split is nothing - even if all of the developers were to disappear bitcoin could continue to function. The core framework of bitcoin was already completed during Satoshi’s time and all that’s left now is a bit of tweaking and adjustment.
  2. Even if the GFW were to limit bandwidth, the miners’ business would not suffer - on the contrary it is the Western mining pools who would suffer losses. You have to realize that more than 50% of bitcoin’s hashing power is located in China, which is to say that the majority of new blocks are created in China. Once such a new block is created, it is first received by the nodes of other pools within China, after which it slowly makes its way over the GFW to the nodes of Western pools. That is to say the foreign pools are slower to receive blocks due to the GFW, which is actually beneficial to Chinese pools. Furthermore, it’s really not a big deal to transmit one or two MB worth of data in a 10 minute interval.
  3. It is feasible to set up a node outside of China, but would you be able to take all of your miners with you outside China? Furthermore, miners need to be associated with a mining pool and there are not that many mining pools outside of China, so you’d just end up having to connect to a Chinese pool anyway. You’d still need to send data to and from China, so getting over the GFW is still an issue. Actually this is all just theoretical; in reality bitcoin has not been blocked by the GFW and due to bitcoin’s decentralized nature it would be difficult to block bitcoin. You worry too much, OP.
[Reply 4]
Posted by LaibitePool (LTC1BTC.com)
  1. I just added some content to my other reply. Given China’s current international bandwidth limitations, I think that 4MB is a reasonable value. Actually I support first going to 2MB since 2MB is enough for now. Future lifting of the cap can be done when it’s necessary.
  2. China’s pools, under the guidance of F2Pool, have already employed the method you’re talking about. I suggest that Western pools also participate (as far as I know, there exist similar alliances in the West). Ideally Bitcoin Core should be upgraded to directly incorporate this functionality so that all pools can act as an interconnected subnetwork, solving the orphan problem. Once a block is released, each pool broadcasts to all standard nodes, thereby increasing the speed with which blocks propagate throughout the network.
[Reply 5]
Posted by Ma_Ya
I too would like to respond briefly to both of your questions.
(1) In theory they should be able to easily deal with sizes as large as 100MB. Blocks of this size could be transmitted in minutes with even a standard home connection and this time is significantly reduced for miners who maintain specialized high-speed connections. Ultimately there is no firewall blocking transmission between pools in China and in any case the sum of China’s hashing power is already over 51%.
(2) If you understood the principles of mining and what I said before, you wouldn’t ask this question. First of all, China’s mining pools are not in any rush to broadcast the nonce of a successfully mined block to nodes across the globe. It only needs to be received by several of the larger pools in China. This is because once it is received by several large pools in China, you’ve already reached more than half [of available hashing power], which is the same as achieving global consensus. When you look at it like this, Chinese miners should actually want there to be interference from the GFW to hinder Western pools. Also, you mentioned setting up a node outside of China and reconstituting [blocks] there, but in reality you wouldn’t save much time that way. Think about it: what is the big difference between transmitting 1MB or 2MB and a few KB? It's probably around nothing more than one second. 10 minutes and 1 second - that’s a factor of 600:1 which is trivial when you take into account the randomness of mining itself. Furthermore your proposition is only advantageous for Western pools and provides no benefit to Chinese pools.
[Reply 6]
Posted by hzq0760
It's not at all surprising that there is some controversy on this subject. The fact that one country has more than 50% of the hashing power and also [translator's note: the sentence cuts off abruptly here with four dashes. Possible auto-censor?]. It's definitely a problem. China's mining pools should do something to resolve this issue.
Note that some posts in this thread were omitted from the translation due to time constraints.
submitted by KoKansei to btc [link] [comments]

A proposal for a simple, inexpensive, and effective way to end Chinese Miner dominance of Bitcoin.

Simply start embedding literature about Falun Gong, the Tiananmen Square protests of 1989, freedom of speech, Taiwan independence, the Tibetan independence movement, etc. into the blockchain. It would probably be best to publish this literature in Chinese as well as English. Contacting some of the various Chinese dissident groups and having them participate directly would add legitimacy, and would fuel the outrage of the tyrannical Chinese government at seeing pro-freedom information bypass the Great Firewall of China and make their censorship look foolish. Initially the Chinese miners will likely try to exclude transactions carrying such information, but those transactions will still get through when mined by non-Chinese miners. The Chinese government will then have to force its miners either to introduce a fork that scrubs that content from the blockchain or to shut down, and they will choose the fork option. Then just don't use their new coin.
They will have their new alt, China-BitmainCoin, and Bitcoin will remain the same as it ever was. The fork would also exclude them from mining the original Bitcoin blockchain containing the literature that is illegal in China, effectively removing the threat of attacks on the old chain from the massive amount of mining hardware centralized in China.
Here you go: http://www.cryptograffiti.info/ It looks like it costs about a penny a letter to use this service, and it is cheaper or even free if you have the wherewithal to attach the messages yourself. Images work too, of course; the obvious choice would be the outlawed Tank Man pictures. Even if the Chinese government decides for whatever reason to turn a blind eye to this pro-freedom information being distributed via the Bitcoin blockchain on nodes operated by Chinese miners, you will still be doing the free world and Bitcoin a service by showing Bitcoin to be a true beacon of free speech in an ever-darkening world, and you will only be out a day's lunch money for doing so. Why let the Chinese miners dictate their vision of Bitcoin and reap profits taken from you when you have the means to stop them?
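For readers curious how text actually ends up in the chain: one standard mechanism is an OP_RETURN output, which lets a transaction carry a small, provably unspendable data payload. The sketch below builds such a script by hand; it is a minimal illustration, not necessarily the encoding cryptograffiti.info itself uses, and the 75-byte single-push limit in the check is a simplification of the usual relay-policy limits.

```cpp
// Minimal sketch: build an OP_RETURN scriptPubKey carrying a short text message.
// This is one common way to attach arbitrary data to a Bitcoin transaction; it is
// not necessarily how cryptograffiti.info encodes its messages.
#include <cstdint>
#include <cstdio>
#include <stdexcept>
#include <string>
#include <vector>

// Script layout: OP_RETURN <direct push of N bytes> <message>.
// Keeping the payload at 75 bytes or fewer lets us use a single direct-push opcode.
std::vector<uint8_t> op_return_script(const std::string& msg) {
    if (msg.size() > 75)
        throw std::runtime_error("payload too large for a single direct push");
    std::vector<uint8_t> script;
    script.push_back(0x6a);                              // OP_RETURN
    script.push_back(static_cast<uint8_t>(msg.size()));  // opcodes 0x01-0x4b push N bytes
    script.insert(script.end(), msg.begin(), msg.end());
    return script;
}

int main() {
    const auto script = op_return_script("tank man was here");
    std::printf("scriptPubKey (%zu bytes): ", script.size());
    for (uint8_t byte : script) std::printf("%02x", byte);
    std::printf("\n");
    return 0;
}
```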
submitted by Barkey_McButtstain to Bitcoin [link] [comments]

If Bitcoin usage and blocksize increase, then mining would simply migrate from 4 conglomerates in China (and Luke-Jr's slow internet =) to the top cities worldwide with Gigabit broadband - and price and volume would go way up. So how would this be "bad" for Bitcoin as a whole??

https://www.linkedin.com/pulse/20141104014739-90103575-top-24-cities-with-fastest-internet-speeds-in-2014
Top 24 Cities Having Fastest Download/Upload Speeds in 2014
The 2014 Cost of Connectivity report, which was produced between July 2014 and September 2014, says the top 24 cities having the fastest download/upload speeds in terms of gigabits per second (Gbps), equivalent to 1,000 megabits per second (Mbps), are as ranked in the chart and summarized below:
[Chart summary: the linked article groups the 24 cities into download-speed tiers of 1, 0.5, 0.35, 0.24, 0.2, and 0.152 Gbps, noting that several entries offer much lower upload speeds (0.5, 0.3, or near 0 Gbps); the individual city names are in the linked article.]
Would mining still be "decentralized" enough if it simply spread out to these cities?
The only danger I could think of would be a few weeks where ASICs would frantically get shipped from locations with slow internet to locations with fast internet.
But mining would go on. Miners are always gonna mine.
Our discourse needs to take into consideration the following possibilities:
(1) The current concentration of mining power among a mere 4 mining conglomerates in China may be a by-product of the current mining parameters themselves - ie:
  • the availability of ASICs,
  • cheap electricity in China,
  • the arbitrary, artificial 1 MB max block size (a temporary cap intended to fight spam - which now might actually help spammers)
  • slow internet in and out of China (across the Great Firewall?)
(2) Every different combination of these parameters may favor some geographic regions over others in terms of mining
Proposition:
It is not the responsibility of Bitcoin to worry about favoring some geographic locations for mining over others.
It is not the responsibility of Bitcoin to worry about favoring existing, incumbent miners over new, future miners (possibly in different locations).
Bitcoin's only responsibility is to favor its Users - by supporting increasing volume and value.
If Bitcoin's need for speed sets off a global internet bandwidth arms race (as countries discover that bandwidth = money), then that would be a nice side-benefit.
submitted by LazLO-LULZkash to btc [link] [comments]

I think the Berlin Wall Principle will end up applying to Blockstream as well: (1) The Berlin Wall took *longer* than everyone expected to come tumbling down. (2) When it did finally come tumbling down, it happened *faster* than anyone expected (ie, in a matter of days) - and everyone was shocked.

Centralization is a double-edged sword.
So far, centralization (and inertia, and laziness, and caution) has been favoring Blockstream.
But if and when a congestion crisis comes, then the tide is gonna turn pretty quickly - and Blockstream's monopoly in terms of "code running on the network" is gonna evaporate quicker than anyone expected.
How will this happen?
Like this:
Bitcoin is going to go into a crisis - not just the current agonizing slow-motion swamp of centralized fascist governance, but a real-time honking red alert involving a clogged-up network, with people freaking out screaming from the rooftops that millions of dollars in transactions are in limbo due to some pointless fucked-up 1 MB "blocksize limit".
And at that point, people are going to get rid of the damn piece of broken cripple-code, immediately.
End of story.
Slow to crumble, fast to collapse
Up till now, the Bitcoin governance crisis has been like slowly sinking into a swamp of quicksand.
But once a real-time congestion crisis actually hits (and online forums become dominated by posts screaming "my transaction is stuck in limbo!!!"), then all the previous bullshit and bloviating from economic idiots about "fee markets" and "soft hard forks" or whatever other nonsense will be instantly forgotten.
And at that point, there will be only 2 things that can happen:
You don't need Blockstream - they need you
When push comes to shove, people are going to remember pretty damn quick that open-source code is easy to patch.
People are going to remember that you don't have to fly to meetings in Hong Kong or on some secret Caribbean island ... or post on Reddit for hours ... or spend hundreds of thousands of dollars on devs ... in order to simply change a constant in your code from 1000000 to 2000000.
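For anyone wondering what "a constant" means concretely: in the pre-SegWit era the consensus cap boiled down to a single compile-time number. The snippet below is a paraphrase of that idea, not an exact excerpt of any particular release, and BlockSizeOk is an illustrative stand-in rather than the real validation routine.

```cpp
// Paraphrase of how the cap was expressed in pre-SegWit-era code, not an exact
// excerpt of any Bitcoin Core release: the whole limit lives in one constant.
#include <cstddef>
#include <cstdio>

static const unsigned int MAX_BLOCK_SIZE = 1000000;  // the famous 1 MB; "2000000" would be the entire edit

// Illustrative stand-in for the size check inside block validation.
bool BlockSizeOk(std::size_t serialized_size) {
    return serialized_size <= MAX_BLOCK_SIZE;
}

int main() {
    std::printf("950,000-byte block accepted?   %s\n", BlockSizeOk(950000) ? "yes" : "no");
    std::printf("1,500,000-byte block accepted? %s\n", BlockSizeOk(1500000) ? "yes" : "no");
    return 0;
}
```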
Eventually, we are going to remember what vote-with-your-CPU consensus looks like
Remember all those hours you wasted on reddit?
Remember all that time you wasted in some hidden downvoted sub-thread debating with some snarky little toxic troll who'd wandered over from a censored Milgram experiment forum full of brainwashed circlejerkers and foot-stomping fascists whose only adrenaline rush and power trip in life had evidently been when they would run around bloviating gibberish like "fee markets!" or "Austrian!" to the self-selected bunch of ignorant submissive sycophants who hadn't been banned from r\bitcoin yet?
Well, when the real crisis hits, all that trivial online drama isn't going to matter any more.
When the inevitable congestion crisis finally comes, it's only going to take a couple of mining pools plus a couple of exchanges to make a simple life-or-death business decision to un-install Blockstream's artificially crippled code and instead install code that has actually been upgraded to deal with the reality of mining and the marketplace - and then we're all going to see what actual vote-with-your-CPU consensus really looks like (instead of vote-with-your-sockpuppet pseudo-consensus on Reddit).
This upgraded code could be Classic, or Unlimited, or even a modded version of Core - it doesn't really matter.
Code is code and money is money, and when push comes to shove, investors and miners aren't going to give a damn what some overpaid economic idiot from Blockstream said at some meeting in Hong Kong once, or what some fascist poisonous astroturfing shill-bot posted a million times on Reddit.
Things usually move slow in Bitcoin-land - except when they move fast
For an example of how fast the tide can turn, just look at a couple of major events from the past two days:
(1) Coinbase is suddenly saying that:
Of course the good devs are flocking to Ethereum now.
Any smart dev can see from a mile away that it would be suicide to try to contribute to Core/Blockstream - Blockstream don't want any new coders or new ideas, they are insular and insecure and they feel downright threatened by new coders with fresh ideas.
They've shown this over and over again, eg:
(2) AntPool is suddenly throwing down the gauntlet, saying they won't do SegWit unless and until they get a hard fork first.
AntPool represents a pretty big chunk of hashrate - so all it's gonna take is another big chunk of hashrate to make the same practical business decision as AntPool (to serve Bitcoin users, instead of serving Blockstream) - and boom! - Blockstream loses their stranglehold on the miners.
Devs don't like dictatorships
Blockstream is too jack-booted lock-step to ever attract any more new dev talent.
This is because good devs are very independent-minded: they can smell a dictatorial organization from a mile away, and so no good dev in their right mind (who might actually have some interesting new ideas that could help Bitcoin) would ever go near Blockstream and its toxic group-think culture.
And so Blockstream will just continue to stagnate under Gregory Maxwell's oppressive "leadership":
Blockstream has backed themselves into a corner
At this point, people are starting to realize that Blockstream is led by desperate and incompetent dead-enders.
(There are some great coders over there such as Pieter Wuille - and Greg Maxwell is also a great Bitcoin coder, but he is toxic as a "leader".)
Blockstream can't do capacity planning, they can't do threat assessment, they can't innovate, they can't prioritize, and they can't communicate.
In the end, they're only destroying themselves - by censoring debate, and ostracizing existing innovators (eg, Mike Hearn and Gavin Andresen) - and scaring away potential new innovators.
Remember, Blockstream != Bitcoin
It's important to remember that Blockstream cannot destroy Bitcoin - any more than Mt Gox could.
Once Blockstream is thoroughly discredited in the eyes of the Bitcoin community and the media, as "the company that almost strangled the Bitcoin network by trying to force blocks to be smaller than the average web page" - it's gonna be time for honey-badger jokes all over again.
Blockstream's gargantuan conflicts-of-interest will be their downfall
Blockstream is funded by insurance giant AXA - a company whose CEO is the head of the friggin' Bilderberg Group. (He's scheduled to move from CEO of AXA to CEO of HSBC soon. Out of the frying pan and into the fire.)
AXA doesn't even want cryptocurrency to succeed anyways, because half of the 1 trillion dollars of so-called "assets" on their fraudulent balance sheet is actually nothing more than toxic debt-backed worthless derivatives garbage. (AXA has more derivatives than any other insurance company.)
In other words, AXA's balance sheet will be exposed as worthless and the company will become insolvent (just like Lehman Brothers and AIG did in 2008) once real money like Bitcoin actually becomes dominant in the world economy - which will "uber" and knock down the whole teetering $1.2 quadrillion derivatives casino.
Hmm... AIG... a giant insurance group whose alleged "assets" turned out to be just a worthless pile of toxic debt-backed derivatives on the legacy ledger of fantasy fiat, AIG who triggered the 2008 financial near-meltdown... Who does AIG remind me of... Oh yeah AXA... So let's put AXA in charge of paying for Bitcoin development! What could possibly go wrong?!?
Blockstream's owners HATE Bitcoin
Never forget:
This is probably the most gigantic CONFLICT OF INTEREST in the history of economics. And it's something to think about, as we sit here wondering for years why Blockstream is not only failing to scale Bitcoin - but is also actively trying to SABOTAGE anyone ELSE who tries to scale Bitcoin as well.
So, be patient - and optimistic
Viewed from one perspective, the fact that this blocksize battle has dragged on for years can be very depressing.
But, viewed from another perspective, the fact that it's still going on is positive - because, for example, nobody really dares to say anymore that "blocks should be 1 MB" - since repeated studies have shown that the current hardware and infrastructure could easily handle 3-4 MB blocks, and Core/Blockstream's own precious SegWit soft-fork is going to need 3-4 MB blocks anyways.
Plus, the only "strengths" that Blockstream had on its side actually turn out to be pretty weak upon closer scrutiny (money from investors like AXA who hate cryptocurrency, censorship from domain squatters who only know how to destroy communities, snark from sockpuppets who can't argue their way out of a wet paper bag on uncensored forums).
In fact, if you were part of Blockstream, you'd be pretty demoralized that a rag-tag bunch of big-blocks supporters has been chipping away at you for the past few years, creating new forums, creating new coins, creating new products and services, exposing the economic ignorance of small-block dead-enders - and all the while, Blockstream hasn't been able to deliver on any of its so-called scaling roadmap.
If it hadn't been for a few historical accidents (cheap energy behind the Great Firewall of China, plus the other "linguistic" firewall that has prevented many people in the Chinese-speaking community from seeing how much of the community actually rejects Blockstream, plus the other accidental fact that bigger blocks involve generalizing Bitcoin, which mathematically happens to require a hard fork), then Blockstream would not have been able to control Bitcoin development as long as it has.
Yeah, they have done routine maintenance stuff and efficiency upgrades, like rewriting libsecp256k1, which is great, and much appreciated - and Pieter Wuille's SegWit would be a great refactoring and clean-up of the code (if we don't let Luke-Jr poison it by packaging it as a soft-fork) - but the network also needs some simple, safe scaling.
And the network is going to get simple, safe scaling - whenever it decides that it really, really wants it.
And there's nothing that Blockstream can do to block that.
submitted by ydtm to btc [link] [comments]

The /r/btc China Dispatch: Episode 4 - Block Size, Chinese Miners and The Great Firewall: Part Two

Hello, Dear Reader, and welcome back to another exciting edition of the /r/btc China Dispatch. In this series of posts, your humble correspondent translates up-to-the-minute bitcoin banter and news from across the Chinese internet into English for your edification and entertainment!
For those of you who have missed the last three episodes of the /btc China Dispatch, you can catch up via the following links (in order of oldest to newest):
https://www.reddit.com/btc/comments/412afd/the_rbtc_china_dispatch_episode_1_china_reacts_to/
https://www.reddit.com/btc/comments/4184k3/the_rbtc_china_dispatch_episode_2_why_doesnt/
https://www.reddit.com/btc/comments/41dizq/the_rbtc_china_dispatch_episode_3_block_size/
In the last episode, by popular demand I posed the following question to the Chinese bitcoin community: “if Chinese miners are concerned that the Great Firewall of China will affect their ability to process large blocks, why don’t they set up nodes outside of China?”
The 8btc.com community responded with far more enthusiasm for this question than I anticipated, and expressed their desire to open an ongoing channel of communication with the English-speaking community of /r/btc. As such, this episode will pick up where the last one left off, taking a look at some of the posts that were made in the thread started yesterday after Episode 3's translation was already complete. As this episode is a "sequel" to Episode 3, I highly recommend you check out Episode 3 before reading further if you have the time.
As the Chinese thread has gained a lot of attention, tomorrow I will select a few of the most upvoted posts in this thread for translation into Chinese so that hopefully we can establish an ongoing dialogue.
Edit: Some people have asked me to post a personal bitcoin address. Here you go: 1Jph5qBjcBPmp1ebMhALomLE4PzaMP18Yp
[Response 1 - Edited]
Posted by LaibitePool (LTC1BTC.com)
I would like you to translate the following message for me, if you could:
It may be that, in the propaganda many people see, China is painted as a country governed by an evil dictatorship.
However, this isn’t the real China. The Chinese government does indeed have many problems, but they are constantly improving.
For example, they are making progress in terms of bitcoin. The Chinese government believes that bitcoin is a legal commodity and can be traded between individuals; they have only forbidden financial institutions from getting involved in bitcoin.
This attitude is much more enlightened and self-assured than that of Russia: Russia came right out and announced that bitcoin is illegal.
I think that the most appropriate way of describing the rule of the Chinese government is: “paternal rule.”
It is true that under the rule of a large government we are not that free, but this is not without its advantages. For example, it is safer for the average person on the street in China than it is in Europe or the US. A girl can go out alone at 3 AM for some street barbecue, something that is pretty much unimaginable in many Western countries.
We enjoy highly effective and cheap public transportation and health care. Whereas you need to reserve an appointment for emergency medical treatment a week in advance in the US or Europe, such a situation is unimaginable in China.
China has a population of 1.3 billion people including 670 million internet users. The number of internet users in China alone is already double the entire population of the US. With such a massive and unified market, the level of development of internet service is already better than Europe and catching up with the US. Furthermore, with its massive advantage in terms of manufacturing it was inevitable that China would have an advantage in terms of hashing power. I hope that everyone can understand China’s hashing power and understand China.
Please send us a link when you’ve reposted this so everyone can discuss it. =)
[Response 20]
Posted by feifei0375
I support foreigners engaging in mining and buying Chinese-made mining equipment to compete with [China’s] miners. It is thanks to Chinese people having spent a lot of money buying mining equipment that bitcoin is secured by tremendous hashing power. There wouldn’t be the bitcoin of today without Chinese people.
[Response 21]
Posted by Seven_Steps_to_Heaven
@ feifei0375
The main underlying reason that foreigners don’t mine is because the cost of mining is too high and profits are too low. When you factor in labor costs, administrative costs, power costs and data center construction costs, foreigners don’t have a prayer of competing with China. When you lose money just by powering up your miner, who is going to do it?
There’s a reason that people say that China is a manufacturing powerhouse and the world’s factory.
[Response 22]
Posted by vatten
What I don't understand, and would like someone from a pool to explain, is this: given that the hashing power is in China, why do the pool nodes also have to be in China? As far as I understand it, the amount of data transmitted between the miners mining with a given pool and the pool's servers is very small; only work allocation is needed, no transaction data is transmitted, and as a result very little bandwidth is used.
The mining pools could establish themselves in a neutral location such as Hong Kong or Singapore, areas where data speeds are basically the same relative to all other locations. As far as miners are concerned, of course they are going to deploy in regions that have cheap power and labor, but the two are basically completely decoupled.
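vatten's point that work allocation is cheap can be sanity-checked with rough numbers. Every message size and frequency in the sketch below is an assumption chosen for illustration (Stratum-style job templates of roughly 1.5 KB every 30 seconds, share submissions of roughly 150 bytes every 5 seconds); the conclusion that pool-to-rig traffic is orders of magnitude smaller than full-node relay traffic does not hinge on the exact figures.

```cpp
// Rough estimate of how little data flows between a mining rig and its pool
// under a Stratum-style protocol, versus what a full node relays.
// All message sizes and frequencies are illustrative assumptions.
#include <cstdio>

int main() {
    const double day_s = 86400.0;

    // Pool -> rig: a fresh job template roughly every 30 s (new block or refreshed
    // transaction set), assumed ~1.5 KB each (job id, prev-hash, coinbase parts,
    // merkle branches).
    const double jobs_per_day = day_s / 30.0;
    const double job_bytes    = 1500.0;

    // Rig -> pool: one share submission roughly every 5 s, assumed ~150 bytes each.
    const double shares_per_day = day_s / 5.0;
    const double share_bytes    = 150.0;

    const double pool_mb_per_day =
        (jobs_per_day * job_bytes + shares_per_day * share_bytes) / 1e6;

    // By contrast, a full node relaying ~1 MB per 10-minute block, plus roughly the
    // same again in transaction gossip (also an assumption).
    const double node_mb_per_day = (day_s / 600.0) * 1.0 * 2.0;

    std::printf("Stratum-style pool<->rig traffic: ~%.1f MB/day\n", pool_mb_per_day);
    std::printf("Full-node block/tx relay traffic: ~%.0f MB/day\n", node_mb_per_day);
    return 0;
}
```

This is why the pool's full nodes could sit in Hong Kong or Singapore while the hashing hardware stays wherever power is cheapest: the two are, as the poster says, basically decoupled.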
[Response 24]
Posted by sabreiib
@vatten
When at some point in the future the government comes to seize miners’ equipment, it’s the equipment that will be a vital weak spot.
Of course the government is content right now to watch the current spectacle. If bitcoin fails it doesn’t matter at all to them and even if it succeeds with most of the hashing power in China the government can pull the cord any time they want, so they’re not in any hurry. When the government decides to make a move, they will do so in secret initially and the Western community will be “like a fish between a cutting board and a sharp knife.”
[Response 25]
Posted by vatten
@sabreiib
If they seize the miners all that will happen is the hashing rate will go down, and the other miners will get to mine more blocks. There’s nothing to worry about as long as there are pools outside of China.
[Response 26]
Posted by Joomla_Zhou_Zhaohui
China isn’t an evil country, despite the fact that it is chaotic. However, there is no doubt that a certain p4rty and a certain g0vernment are evil.
[Response 28]
Posted by fuck7b
The Great Firewall is nothing but an excuse. It's obvious that Chinese miners are worried that lifting the cap as proposed in XT and the like will increase their bandwidth and storage costs. They are only concerned with short-term profit and are hindering bitcoin's long-term development. Miners currently widely support the proposal to lift the limit to 2MB, but even if the cap is lifted to 2MB following a hard fork, it won't be long before the transaction bottleneck reappears just as it has at 1 MB and the network is again unable to satisfy international transaction needs. A graded expansion of the limit is the best thing for bitcoin's development.
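A quick worked example of the arithmetic behind this argument: on-chain capacity scales linearly with the cap, so a one-time doubling only doubles today's handful of transactions per second. The 250-byte average transaction size below is an assumed ballpark figure, not a measured network statistic.

```cpp
// Throughput at various block-size caps, assuming an average transaction of
// ~250 bytes (an assumption in the right ballpark for this era).
#include <cstdio>

int main() {
    const double avg_tx_bytes     = 250.0;
    const double block_interval_s = 600.0;
    const double caps_mb[]        = {1.0, 2.0, 8.0};

    for (double cap : caps_mb) {
        double tx_per_block = (cap * 1e6) / avg_tx_bytes;
        std::printf("%4.1f MB cap -> ~%6.0f tx/block, ~%4.1f tx/s\n",
                    cap, tx_per_block, tx_per_block / block_interval_s);
    }
    return 0;
}
```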
submitted by KoKansei to btc [link] [comments]
