Some immature ideas on the design of a new VMess #711

Closed
p4gefau1t opened this issue Jun 1, 2020 · 48 comments

@p4gefau1t

p4gefau1t commented Jun 1, 2020

First of all, many thanks to the developers of the v2ray community, who responded very actively and took part in patching the vulnerabilities within a very short time, contributing selflessly to the healthy growth of the v2ray community. The vulnerabilities we have been discovering have in fact caused them quite a bit of trouble, and I apologize for that here as well. Thank you for your hard work!

Still, we have to face reality. Because of problems in both the design and the implementation of the VMess protocol, the current mitigations still cannot eliminate its fingerprint very well.

v2ray/v2ray-core#2523

The idea behind this attack is exactly the same as the replay attack Shadowsocks suffered in 2017. The root cause is the use of an encryption mode without authentication - in other words, an insecure integrity check - which lets an attacker leak information through ciphertext padding and side-channel attacks.

Shadowsocks later introduced AEAD cipher modes to solve this problem. I think we can learn from and build on that.
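To make the AEAD point concrete, here is a minimal sketch in Go of authenticated frame sealing with ChaCha20-Poly1305 (via golang.org/x/crypto/chacha20poly1305). It only illustrates the general technique, not VMess or any proposed VMess2 framing; the function name sealFrame and the use of a frame header as associated data are assumptions made for this example.

```go
package main

import (
	"crypto/cipher"
	"crypto/rand"
	"fmt"

	"golang.org/x/crypto/chacha20poly1305"
)

// sealFrame encrypts one frame so that any tampering, truncation, or padding
// manipulation is detected when the peer calls Open. The nonce must never
// repeat under the same key.
func sealFrame(aead cipher.AEAD, plaintext, header []byte) (nonce, ciphertext []byte, err error) {
	nonce = make([]byte, aead.NonceSize())
	if _, err = rand.Read(nonce); err != nil {
		return nil, nil, err
	}
	// The header is bound as associated data: it stays in cleartext but is
	// covered by the authentication tag.
	ciphertext = aead.Seal(nil, nonce, plaintext, header)
	return nonce, ciphertext, nil
}

func main() {
	key := make([]byte, chacha20poly1305.KeySize) // in practice, derived from the shared secret
	if _, err := rand.Read(key); err != nil {
		panic(err)
	}
	aead, err := chacha20poly1305.New(key)
	if err != nil {
		panic(err)
	}
	nonce, ct, err := sealFrame(aead, []byte("payload"), []byte("frame-header"))
	if err != nil {
		panic(err)
	}
	// Open returns an error if the ciphertext, nonce, or header was modified,
	// which is exactly what defeats padding-oracle style probing.
	pt, err := aead.Open(nil, nonce, ct, []byte("frame-header"))
	fmt.Println(string(pt), err)
}
```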

I am not a v2ray core developer, but after auditing the code together with the qv2ray developers and finding these long-standing and fairly serious problems in v2ray, I venture to suggest that, to solve the problem once and for all, we need to design a new VMess protocol to replace the current one. Since the VMess version number is 1, for convenience I will tentatively call the new protocol VMess2.

I have a few immature ideas of my own, which I post here for criticism and discussion, in the hope of sparking better proposals and pooling the community's wisdom:

  1. VMess2 should not be compatible with the old protocol; otherwise negotiation and backward-compatibility issues are unavoidable, which could invite downgrade attacks or introduce more fingerprints. Nor should VMess2 prepare for backward compatibility with future protocols (e.g. by reserving fields), as such provisions could become a burden on the new protocol or a vulnerability in their own right (VMess's version-number field is a direct reason why its fingerprint problem is so hard to mitigate).

  2. Like VMess, VMess2 should run over reliable transports, and its transport layer should be pluggable.

  3. VMess2 is an encrypted proxy protocol, and its ciphertext traffic should not carry obvious fingerprints.

  4. VMess2 should guarantee the confidentiality of the carried data and provide identity authentication and forward secrecy.

  5. VMess2 should resist replay attacks, ciphertext-padding attacks, and the other active probes currently common from the GFW (a rough anti-replay sketch follows this list).

  6. VMess2 should be optimized for performance as much as possible, especially on mobile devices.

  7. The VMess2 specification should go through broad community discussion, review, and cryptographic analysis, and the final implementation must undergo rigorous security auditing and attack testing.
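Regarding point 5, one common way to resist replayed handshakes is a bounded time window combined with a cache of recently seen authentication nonces. The Go sketch below is a generic illustration under that assumption, not part of any actual VMess2 design (which does not exist yet); the type name replayFilter is made up for the example.

```go
package main

import (
	"errors"
	"fmt"
	"sync"
	"time"
)

// replayFilter rejects handshakes whose timestamp falls outside a small
// window, or whose nonce has already been seen within that window.
type replayFilter struct {
	mu     sync.Mutex
	window time.Duration
	seen   map[string]time.Time // nonce -> first time it was seen
}

func newReplayFilter(window time.Duration) *replayFilter {
	return &replayFilter{window: window, seen: make(map[string]time.Time)}
}

func (f *replayFilter) Check(nonce []byte, ts time.Time) error {
	now := time.Now()
	if ts.Before(now.Add(-f.window)) || ts.After(now.Add(f.window)) {
		return errors.New("timestamp outside the acceptable window")
	}
	f.mu.Lock()
	defer f.mu.Unlock()
	// Evict expired entries so the cache stays bounded.
	for k, t := range f.seen {
		if now.Sub(t) > 2*f.window {
			delete(f.seen, k)
		}
	}
	key := string(nonce)
	if _, dup := f.seen[key]; dup {
		return errors.New("replayed handshake")
	}
	f.seen[key] = now
	return nil
}

func main() {
	f := newReplayFilter(2 * time.Minute)
	nonce := []byte("example-nonce")
	fmt.Println(f.Check(nonce, time.Now())) // <nil>: accepted
	fmt.Println(f.Check(nonce, time.Now())) // error: replayed handshake
}
```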

These are just some immature thoughts of mine; discussion and criticism are welcome. It should be pointed out that we should not rush to decide whether VMess2 should be drafted and designed at all, or what its exact format and standard should be, but wait until the community has discussed it enough. For now I suggest that the community's developers focus their energy on mitigations and on patching the vulnerabilities.

@89650021

89650021 commented Jun 1, 2020

As the author of naiveproxy pointed out, reinventing TLS always runs into problems of one kind or another. My thought is that the protocol should only handle authentication and leave encryption to the battle-tested TLS?

@devyujie

devyujie commented Jun 1, 2020

I'm afraid this will be a very long process.

@iseki0

iseki0 commented Jun 1, 2020

My (iseki's) personal view is also to abandon raw VMess entirely and require all users to switch to something like WebSocket + VMess + TLS. Even closing connections after a random delay is not acceptable; the root of the problem apparently cannot be fixed properly. The sensible approach is probably to make the traffic resemble existing protocols as closely as possible, and in that case TLS looks like the most suitable choice.

@Walkerby

Walkerby commented Jun 1, 2020

Would it be better to follow an approach similar to trojan next door?

@p4gefau1t
Author

As the author of naiveproxy pointed out, reinventing TLS always runs into problems of one kind or another. My thought is that the protocol should only handle authentication and leave encryption to the battle-tested TLS?

Indeed, TLS is more secure and more covert.

But using TLS just to get through the wall always feels like using a sledgehammer to crack a nut. My understanding is that VMess2 could serve as a lightweight option that complements TLS. Besides, it is an application-layer protocol with a pluggable transport layer, so it does not conflict with TLS at all.

@RPRX

RPRX commented Jun 1, 2020

As the author of naiveproxy pointed out, reinventing TLS always runs into problems of one kind or another. My thought is that the protocol should only handle authentication and leave encryption to the battle-tested TLS?

Indeed, TLS is more secure and more covert.

But using TLS just to get through the wall always feels like using a sledgehammer to crack a nut. My understanding is that VMess2 could serve as a lightweight option that complements TLS. Besides, it is an application-layer protocol with a pluggable transport layer, so it does not conflict with TLS at all.

By that logic, browsing HTTPS websites every day, and the TLS requests made by mobile apps, would be nuclear bombs (apologies if this sounds presumptuous).

@ghost

ghost commented Jun 1, 2020

By that logic, browsing HTTPS websites every day, and the TLS requests made by mobile apps, would be nuclear bombs (apologies if this sounds presumptuous).

What about the RTT? One reason some people still use the ss protocol is that ss can shave off one RTT when TFO is enabled.

@iseki0

iseki0 commented Jun 1, 2020

What if we use TLS 0-RTT? How much could that optimize things?

@RPRX

RPRX commented Jun 1, 2020

By that logic, browsing HTTPS websites every day, and the TLS requests made by mobile apps, would be nuclear bombs (apologies if this sounds presumptuous).

What about the RTT? One reason some people still use the ss protocol is that ss can shave off one RTT when TFO is enabled.

With TLS 1.3, a server with early data enabled can do 0-RTT.

@RPRX

RPRX commented Jun 1, 2020

What if we use TLS 0-RTT? How much could that optimize things?

In practice, a WebSocket connection lives for a long time, so optimizing the handshake phase may not make much difference.

@Leo-Mu

Leo-Mu commented Jun 1, 2020

@studentmain @cpdyj @RPRX would you like to take a look at QUIC, or going a step further, HTTP/3?

@ghost

ghost commented Jun 1, 2020

@studentmain @cpdyj @RPRX would you like to take a look at QUIC, or going a step further, HTTP/3?

First of all, how badly is it being QoS'ed at the moment?

@RPRX

RPRX commented Jun 1, 2020

For discussion about TLS, please move to v2ray/v2ray-core#2526

@ghost

ghost commented Jun 1, 2020

shadowsocks/shadowsocks-org#157 Since everyone is designing new protocols, it is probably worth mentioning Shadowsocks's new-protocol plan here.

@cecini

cecini commented Jun 2, 2020

Hmm, consider carrying the protocol directly on top of QUIC (TLS 1.3).

@henrypijames

No! The new protocol must not be (either exclusively or mainly) based on UDP, like QUIC/H3, because they are increasingly being QoS'ed, to the point of no longer being usable.

I like H3 and I'd love to switch to it. I also love other UDP-based protocols and tried WireGuard very early on. But a few weeks ago (around the time WG version 1.0 came out), WG went from working smoothly and clearly outperforming V2 to not working at all, because of QoS - or you can call it blocking, because >99.9% of the traffic is not coming through. Same process, though less dramatic, with H3. Anecdotal reports suggest that this is happening across different cities/provinces and ISPs.

The problem is that there are currently very few early adopters of H3, especially inside China, so ISPs have little to lose and much to gain from QoS'ing UDP. Of course, you can put on a fake WeChat Video header, but we can't have a protocol that relies on faking something that is third-party, proprietary, and subject to arbitrary change. So until H3 becomes mainstream, we have to treat it as absolutely unreliable.

@proletarius101

proletarius101 commented Jun 4, 2020

No! The new protocol must not be (either exclusively or mainly) based on UDP, like QUIC/H3, because they are increasingly being QoS'ed, to the point of no longer being usable.

A more sustainable structure could be vmess v2 OVER whatever version of HTTP OVER TLS, exactly as v2ray was designed.

@est

est commented Jun 4, 2020

I suggest designing the protocol to be configurable end to end, so that every installation randomly generates its own protocol.

With many protocols, each individual protocol becomes easier to crack, but it becomes vastly harder for GFW staff to earn their KPIs - you can't write in this month's work report that you took down some protocol and the result was that only a single-digit number of users got blocked...

Think from the perspective of the vendors of the black boxes sitting in the middle of the Internet pipes: what kind of protocol is the most annoying? One approach is to buff the defense (design exquisitely crafted traffic that is hard to identify); another is to debuff each hit (every hit only takes out a handful of insignificant small targets).

Another benefit is that the protocol can be swapped at any time. You deploy a set of traffic-classification rules in the box today, and tomorrow everyone's traffic fingerprint has changed... Over time this widens the distrust between the non-technical bureaucrats and the technical rank-and-file. Make protocol-cracking a high-labor, low-reward job, and nobody will want to do that kind of dirty work anymore.

Of course, all of this rests on one assumption: that protocols can be randomly composed and run efficiently for a while, and that no catch-all identification method can be found.

@henrypijames

henrypijames commented Jun 4, 2020

Of course, all of this rests on one assumption: that protocols can be randomly composed and run efficiently for a while, and that no catch-all identification method can be found.

Right, dream on. Since the invention of cryptography, people have been struggling to design a single protocol that is secure. Now, you want to simply jump ahead and design an entire system that automatically generates protocols that are all secure? If you could, it would be one of the greatest achievements in the history of the field, more impressive than the invention of the Enigma machine or its defeat at the hands of Alan Turing.

More fundamentally, diversity only provides more security if the diversity is real - your protocols must bear no similarity to each other. Otherwise, whatever is common between them is a key characteristic to be recognized (and attacked), and the diversity becomes a liability. Achieving true randomness, however, is a literally astronomical task (as in, scientists are trying to use the light from pulsars as a source of random data), probably not achievable without significant advances in quantum physics.

Over time this widens the distrust between the non-technical bureaucrats and the technical rank-and-file. Make protocol-cracking a high-labor, low-reward job, and nobody will want to do that kind of dirty work anymore.

This is intelligence gathering we're dealing with. Intel jobs are different from normal (tech) jobs, and they're managed differently, too. Making an intel job hard doesn't lead to the job being abandoned - it leads to more resources being poured into it. You may want to read a few books (even fictional novels) or watch a few movies (but ones more realistic than 007) on how intel gathering works in the real world.

@est

est commented Jun 4, 2020

You may want to read a few books (even fictional novels) or watch a few movies (but ones more realistic than 007) on how intel gathering works in the real world.

Uh, I personally know people at Venustech (启明星辰)... Do the books you're talking about cover them?

Since the invention of cryptography, people have been struggling to design a single protocol that is secure

That's very well put. Cryptosystems are hard to design and easy to get wrong. But think about it the other way around: what scenarios are cryptosystems meant for? Don't talk about security divorced from the scenario. Ordinary users facing protocol reverse-engineering researchers hired by companies have only a numerical advantage, not a technical one, so all they can do is buy time through the exchange ratio. Maybe one researcher can defeat n protocols; for example, cracking one socks5 protocol is easy, but cracking 100 socks5 variants becomes manual labor. What we can fight for is to make the variety of protocols outgrow the total time they need to spend, so that the payoff is far from worth the investment. It's like the Japanese invaders fighting 'pacification' campaigns in occupied North China: the longer they fought, the more the costs outran the gains, and their advantage in weapons and technology became less and less decisive.

@ghost

ghost commented Jun 4, 2020

Of course, all of this rests on one assumption: that protocols can be randomly composed and run efficiently for a while, and that no catch-all identification method can be found.

Right, dream on. Since the invention of cryptography, people have been struggling to design a single protocol that is secure. Now, you want to simply jump ahead and design an entire system that automatically generates protocols that are all secure? If you could, it would be one of the greatest achievements in the history of the field, more impressive than the invention of the Enigma machine or its defeat at the hands of Alan Turing.

This idea is more like "DDoSing" the firewall developers. It's more about steganography than cryptography. The generated protocols may not be secure at all, but they would be somewhat hard to identify and each would have a different "fingerprint".

@est

est commented Jun 4, 2020

The generated protocols may not be secure at all, but they would be somewhat hard to identify and each would have a different "fingerprint"

That's the point. We can even bait them with a protocol that is easy to identify, but each obvious filter rule has to be multiplied by the millions of variations. Keeping track of the mutations would be extremely error-prone and may leave loopholes in the middleboxes themselves.

IIRC some DNS 0-days are still unfixed in the middleboxes because the protocol is too versatile; when you write complex filter code in C or with DPDK, it leads to memory-overflow problems. When the middlebox's arm overstretches, the asset becomes a liability.

@henrypijames

henrypijames commented Jun 5, 2020

Ordinary users facing protocol reverse-engineering researchers hired by companies have only a numerical advantage, not a technical one, so all they can do is buy time through the exchange ratio.

This is not true. The very nature of modern cryptography is that encryption should be easier (by orders of magnitude) than decryption without the required key. In other words, a properly designed defense should be able to withstand offense of a vastly more resourceful attacker (much like a medieval wall, or the Great Wall - look at that, the analogy has come full circle). If, on the other hand, you believe no such algorithmic asymmetry exists, and the only way to keep up with a technically superior opponent is guerrilla warfare, then the entire field of modern cryptography is pointless to you.

I also believe you're overestimating the number of people who would use non-trivial (as in, customization required) wall-circumvention tools (we're significantly less than 0.1% of the population, and always will be), and underestimating the amount of technical, financial, and human resources the maintainers of the wall are willing to throw at you (they could mobilize more than 1% of the population against any target of their choosing whenever they need to). So, instead of drowning them in the sea of your "people's war", I'm afraid it is you who will be drowning in their sea of people's war.

@henrypijames

henrypijames commented Jun 5, 2020

Maybe one researcher can defeat n protocols; for example, cracking one socks5 protocol is easy, but cracking 100 socks5 variants becomes manual labor. What we can fight for is to make the variety of protocols outgrow the total time they need to spend, so that the payoff is far from worth the investment.

You're still basing your argument on the fundamentally mistaken belief that more is better, while in fact the opposite is true. In mathematical terms, if O(n) is the complexity (or difficulty to break) of a system with n sets of parameters, then you believe O(100) = 100 * O(1), or at least O(100) > 10 * O(1), whereas actually O(100) < O(1).

Let's come back to the Enigma machine. Its ingenuity was the "rotor" which changed the cypher with every key stroke. So, in a way, every message was sent with a different set of parameters. Now, were the Allies drowned out by the vast number of encrypted messages they intercepted, or were they able to use that vast number to crack the encryption? We know the latter was the case, because Alan Turing and his people figured out that many messages shared a commonality - they would end with "H*** Hitler". This attack vector was unforeseen by the designer of the Enigma machine, and the German field commanders using the machines weren't aware how fatal this seemingly innocuous (at least for them) phrase would turn out to be.

If you design a system that automatically generates sets of parameters, they will certainly not be truly random, and will contain some commonality. Even in the extremely unlikely case that your design is flawless like the Enigma, your users won't be more careful and knowledgeable than the German field commanders, so sooner or later, a commonality will emerge. The more sets of parameters you generate, the more obvious that commonality will be, and the easier it becomes to crack the system.

If your system is watertight, one protocol is enough (from a security perspective, that is - you'll probably need more protocols for different features); if it's not watertight, the more protocols, the more it will leak.

@xiaokangwang

Thanks to @p4gefau1t @est @henrypijames for your contributions. As things stand, the goal for VMess is first to develop a protocol with no known replay or active-probing weaknesses. I have seen your discussion and will refer to it in the future to further improve V2Ray. Please keep the discussion going; I will publish some thoughts and ideas related to it later.

@est

est commented Jun 5, 2020

encryption should be easier (by orders of magnitude) than decryption without the required key. In other words, a properly designed defense should be able to withstand offense of a vastly more resourceful attacker

Uhhhh, the point is not to decrypt HTTPS; the cracker just needs to guess whether the traffic looks suspicious and drop the connection. Even an insecure protocol works as long as its fingerprint is not recognized by the firewall.

underestimating the amount of technical, financial, and human resources the maintainers of the wall are willing to throw at you

That's why keeping a low profile and a small attack surface is way more important than a secure™ protocol. Maybe the protocol is easy to crack, but it only affects a handful of users - so what? Those users can generate a new protocol on the fly.

Let's come back to the Enigma machine

How is Enigma even relevant? Why do you always reach for the Enigma analogy, as if it's the only successful cipher-attack story you can tell?

So let's talk about this Enigma.

  1. The machine, nicknamed the bombe, took Alan Turing a year to build. By the time you finish building your clever-ass Enigma cracker, the existing protocol is already obsolete and the data transfer is already done.
  2. Alan couldn't have done shit if the Poles hadn't handed over several physical Enigma machines to the Brits and already cracked early versions.
  3. Enigma is overrated. It was mostly used by German Navy U-boats, and the U-boats were no longer a threat by the time the bombe was ready.
  4. Do you know the German Army used a different type of cipher machine? What did your Alan boi do to crack it?

so sooner or later, a commonality will emerge. The more sets of parameters you generate

Suppose the protocol generator is a superset of vmess/vmess2/vless/whatever; then mutual destruction is guaranteed. LMAO.

@henrypijames

@est You keep describing a solution path that is impossible on both philosophical grounds (by which I mean widely accepted principles of information theory) and practical grounds (by which I mean nothing in this direction has ever worked). But I no longer believe I can convince you, and for others I've made my argument as well as I can, so I will stop.

@henrypijames

henrypijames commented Jun 5, 2020

Now some constructive input (for a change): I, like many others, strongly support a modular approach - in principle. But in practice, I recognize that being modular carries a cost in terms of complexity, performance, and most importantly in this case, integrity (security) as well. As a pragmatist, I support reducing complexity if there are significant gains to be made - as in the case of WireGuard.

Our main focus currently is obfuscation. Again, in principle, a pluggable obfuscation layer is ideal. But the very nature of obfuscation dictates that in order for it to work, it has to be fitted (and preferably even custom-designed) to the layers inside and outside (or above and below) it. For example, you could do Vmess-ObfX-TLS, or Vmess-TLS-ObfY, and both schemes may have their advantages, but it is very hard to imagine the same obfuscation layer serving as both ObfX and ObfY.

So, despite my principled preference for modular design, I can very well imagine - and would very much understand - if in the end full pluggability turns out not to be possible (or practical), and we end up with one or two or three constellations of transport+integrity+security+obfuscation+whatever that are well fitted to each other and not arbitrarily replaceable.

@est

est commented Jun 6, 2020

@henrypijames I think you and I had common ground in the beginning, but you are just extremely pessimistic. Still, I strongly believe that encapsulating everything inside TLS is a dead end. It won't be any better than the naiveproxy project.

@nametoolong

Let's come back to the Enigma machine.

Why isn't anyone thinking of using the Enigma machine to circumvent the GFW? I did. It is seriously broken, but it still works.

@henrypijames

henrypijames commented Jun 6, 2020

I am not a member of the "let's all move to TLS and be done with it" camp, either. Yes, TLS is the best at what it is designed to do, but what we're trying to do is somewhat different from that. Fixing on TLS a priori and ruling out any alternative is too much blind faith for my taste. But it is possible that, after having considered and tried other alternatives and having failed to find a better one, we come back to TLS a posteriori. I would have no problem with that. I understand some people are arguing right now that we have already reached that point - that all the "proprietary" protocols of WS and V2 have failed. I'm not convinced of that, because - WireGuard. WG is a success story of going around TLS for the right reason and getting it to work. Now, I doubt V2 will ever reach the level of WG in terms of overall quality of design and implementation, but I'm not willing to give up and simply "settle" with TLS just yet.

@proletarius101

I'm not convinced of that, because - WireGuard. WG is a success story of going around TLS for the right reason and getting it to work. Now, I doubt V2 will ever reach the level of WG in terms of overall quality of design and implementation, but I'm not willing to give up and simply "settle" with TLS just yet.

TLS is mainly for obfuscation - hiding from the GFW. WG is well crafted in terms of efficiency and security, but we are talking about the quality of circumvention. Probably vmess is a more appropriate comparison. WG considers nothing about circumvention, which is why it could be built from scratch without using the existing HTTP stack, whereas circumvention is all about disguising (until we find an even better solution, which is beyond the scope of the architecture of today's web).

@ghost

ghost commented Jun 7, 2020

circumvention is all about disguising (until we find an even better solution, which is beyond the scope of the architecture of today's web)

Totally agree. At least the transport protocol should aim at disguise. The problem is: disguise as what? White noise? TLS? HTTP? Another VPN protocol? There are so many options for the transport protocol.

I disagree with "the new protocol SHOULD be built over TLS and rely on the security provided by TLS" - do you remember the TLS MITM by Kazakhstan (net4people/bbs#6)?

@proletarius101

proletarius101 commented Jun 7, 2020

do you remember the TLS MITM by Kazakhstan (net4people/bbs#6)?

AFAIK it was like the CNNIC scandal. Technically we have to trust those CAs (and it's more about authenticity than confidentiality, although MITM bridges the two). Confidentiality is guaranteed by vmess or whatever protocol is encapsulated inside. So far they work well, and I agree that they could serve this purpose better.

The problem is: disguise as what? White noise? TLS? HTTP? Another VPN protocol? There are so many options for the transport protocol.

Yeah, that's a good point. Obviously we want to disguise ourselves as innocent traffic. It can't be another VPN. It could be random sequences, unless the government starts a white-listing policy (I can already see signs of that).

Under the white-listing assumption, we have to disguise ourselves as an existing protocol. The most widely used protocol is HTTP (over TLS). It probably won't be blocked by the government before it completely closes the door. I can't think of another protocol with that characteristic.

@henrypijames

henrypijames commented Jun 7, 2020

I can't think of another protocol with that characteristic.

WeChat video call (and perhaps audio call, too). It's probably the next best ubiquitous type of traffic that has a high data volume.

@proletarius101

proletarius101 commented Jun 7, 2020

WeChat video call (and perhaps audio call, too).

If that's what was implemented in v2ray, yeah, it's over UDP and is different from HTTP over TLS. But it's quite easy to block as long as they know the IP address range of the WeChat video servers. At least it has been blocked by now.

Anyway, UDP or other non-HTTP stacks do have some potential. Feel free to list and inspect them one by one.

@henrypijames

henrypijames commented Jun 7, 2020

But it's quite easy to block as long as they know the IP address range of the WeChat video servers.

I always thought video calls are p2p if a direct connection is possible - are you sure they're always relayed through servers?

But anyway, thanks to the Coronavirus, we now have a proliferation of video conferencing software - and an exponential growth of their usage. If any of them allow a p2p link, it would be a candidate for us.

Although, as the Parrot paper has made clear, imitating VoIP behaviorally (and not just protocol-wise) is very hard, so presumably imitating video conferencing won't be easy, either.

@proletarius101

If any of them allow a p2p link, it would be a candidate for us.

Well, it's quite hard to implement p2p video conferencing... Most products, including open source ones, use a bridging-and-cascading architecture (see https://webrtchacks.com/sfu-cascading/jitsi-meet-architecture/). I'm not familiar with WeChat's implementation, but I believe Zoom and Jitsi are not p2p. They are actually trying to achieve p2p encryption, but the transmission is still centralized.

@henrypijames

It doesn't have to be mandatory p2p - that can't work in the real world (there are too many cases where both ends are behind a NAT). The protocol only has to offer an optional p2p link when the network topology allows it, and that link could be what we imitate.

@proletarius101

FYI, ORAM is a technique for hiding input/output access patterns while preserving the algorithm's behavior. It can be used to mitigate traffic probing.

@henrypijames

FYI, ORAM is a technique for hiding input/output access patterns while preserving the algorithm's behavior. It can be used to mitigate traffic probing.

I don't think that works. We need more than hiding - we need to hide behind (or inside) something else. Being unidentifiable is good for others, but not for us: The wall can simply consider anything unidentifiable to be suspect (and to some extent, already does).

@proletarius101

proletarius101 commented Jun 10, 2020

The wall can simply consider anything unidentifiable to be suspect (and to some extent, already does).

Hmm... Of course I'm talking about the protocol encapsulated in HTTP and TLS. So far vmess keeps getting detected even though the cover is innocent. One reason passive probing is possible is that vmess does nothing about the shape of the real packets, so periodically there are small packets carrying handshakes (a rough padding sketch follows).
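One common countermeasure to that kind of length fingerprint is to pad short records up to a randomized minimum size. The Go sketch below uses a simple length-prefixed frame invented for this example (padFrame is not vmess's real wire format); it only illustrates the idea.

```go
package main

import (
	"crypto/rand"
	"encoding/binary"
	"fmt"
	"math/big"
)

// padFrame appends random padding so that handshake-sized writes do not stand
// out; a 2-byte prefix records the real payload length so the peer can strip
// the padding. This framing is invented for illustration only.
func padFrame(payload []byte, minLen int) ([]byte, error) {
	padLen := 0
	if len(payload) < minLen {
		// Pad at least up to minLen, plus a small random extra amount.
		extra, err := rand.Int(rand.Reader, big.NewInt(64))
		if err != nil {
			return nil, err
		}
		padLen = minLen - len(payload) + int(extra.Int64())
	}
	frame := make([]byte, 2+len(payload)+padLen)
	binary.BigEndian.PutUint16(frame, uint16(len(payload)))
	copy(frame[2:], payload)
	if _, err := rand.Read(frame[2+len(payload):]); err != nil {
		return nil, err
	}
	return frame, nil
}

func main() {
	f, err := padFrame([]byte("tiny handshake"), 256)
	if err != nil {
		panic(err)
	}
	fmt.Println("padded frame length:", len(f)) // somewhere in [258, 321]
}
```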

@yangbowen

Inspired by the above discussion, I'm wondering whether the following would be practical:

  1. The "disguising" layer lies below the encryption and authentication, and is pluggable.
  2. The specific characteristics to be mimicked get some degree of customization.
  3. And perhaps some mechanism to automatically "train" the disguiser on legitimate traffic captured in transit (see the entropy sketch after this list).
    For example, maybe the principles behind some lossless compression algorithm could be adapted to derive an "entropy profile" of legitimate traffic (implying which parts are likely a plaintext header and which parts are likely high-entropy payload), and then disguise specifically into that profile, presumably by duplicating the low-entropy "plaintext header" and substituting the high-entropy "payload" with the pseudorandom bitstream from the upper layers.
    Highly popular Internet services and protocols, like the aforementioned WeChat video call, and TLS of course, probably deserve manually crafted disguises. But perhaps this automated "training" mechanism would be viable for mimicking the vast number of "not-so-popular protocols" that GFW developers presumably won't bother analyzing manually either?
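To make the "entropy profile" idea in point 3 a bit more concrete, here is a rough Go sketch that computes the Shannon entropy of fixed-size windows of a captured packet: low values suggest structured plaintext (headers), values near 8 bits/byte suggest compressed or encrypted payload. The window size and the sample data are arbitrary choices for the example, not a proposal for how the disguiser should actually work.

```go
package main

import (
	"fmt"
	"math"
)

// windowEntropy returns the Shannon entropy (bits per byte) of each
// fixed-size window of data.
func windowEntropy(data []byte, window int) []float64 {
	var out []float64
	for start := 0; start < len(data); start += window {
		end := start + window
		if end > len(data) {
			end = len(data)
		}
		var freq [256]int
		for _, b := range data[start:end] {
			freq[b]++
		}
		n := float64(end - start)
		h := 0.0
		for _, c := range freq {
			if c == 0 {
				continue
			}
			p := float64(c) / n
			h -= p * math.Log2(p)
		}
		out = append(out, h)
	}
	return out
}

func main() {
	// A text-like "header" followed by a zero-filled "body": the per-window
	// entropy drops sharply where the structured region ends.
	sample := append([]byte("GET /index.html HTTP/1.1\r\nHost: example.com\r\n\r\n"), make([]byte, 64)...)
	fmt.Println(windowEntropy(sample, 32))
}
```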

@yangbowen

One thing I agree with @studentmain on: it's more about steganography than cryptography.

@cxzlw

cxzlw commented Oct 6, 2020

I suggest designing the protocol to be configurable end to end, so that every installation randomly generates its own protocol.

With many protocols, each individual protocol becomes easier to crack, but it becomes vastly harder for GFW staff to earn their KPIs - you can't write in this month's work report that you took down some protocol and the result was that only a single-digit number of users got blocked...

Think from the perspective of the vendors of the black boxes sitting in the middle of the Internet pipes: what kind of protocol is the most annoying? One approach is to buff the defense (design exquisitely crafted traffic that is hard to identify); another is to debuff each hit (every hit only takes out a handful of insignificant small targets).

Another benefit is that the protocol can be swapped at any time. You deploy a set of traffic-classification rules in the box today, and tomorrow everyone's traffic fingerprint has changed... Over time this widens the distrust between the non-technical bureaucrats and the technical rank-and-file. Make protocol-cracking a high-labor, low-reward job, and nobody will want to do that kind of dirty work anymore.

Of course, all of this rests on one assumption: that protocols can be randomly composed and run efficiently for a while, and that no catch-all identification method can be found.

But how many people would be willing to sacrifice their own dedicated servers just to destroy the enthusiasm of the GFW's staff? Generally, people who self-host won't do this, and "airport" (commercial proxy) operators don't want to do this - unless you can provide a completely reliable randomization algorithm.

@github-actions

github-actions bot commented Jan 4, 2021

This issue is stale because it has been open 90 days with no activity. Remove stale label or comment or this will be closed in 5 days

@github-actions github-actions bot added the Stale label Jan 4, 2021
@github-actions github-actions bot closed this as completed Jan 9, 2021