So far, a total of 1,301,153,248,512,000 (about 1.3 quadrillion) different letter variations have been tested. Every single one of them has been checked against a dictionary of ~4,500 words, each word at each possible position. Out of those 1.3 quadrillion variations, only six (!) potential results have occurred ("At least one word of length 5 or longer found in each of the three strings").
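To make that filter concrete, here is a minimal sketch of such a dictionary check, assuming a tiny stand-in word list in place of the actual ~4,500-word dictionary (the names and word list are mine, for illustration only):

```python
# Tiny stand-in for the ~4,500-word dictionary used by the search.
WORDS = {"HELLO", "WORLD", "KILLING", "PEOPLE"}

def contains_long_word(text, words=WORDS, min_len=5):
    """True if any dictionary word of length >= min_len occurs in text."""
    return any(w in text for w in words if len(w) >= min_len)

def passes_filter(strings):
    """A candidate survives only if every one of its strings holds a long word."""
    return all(contains_long_word(s) for s in strings)
```

A candidate decryption is kept only if all three strings pass; everything else is discarded immediately.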
Still aware of the total size of Z340?
26^63, or about 1.39e+89, i.e. 139,098,011,710,742,195,590,974,259,094,800,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000 theoretically existing variations (63 homophones). Chess has approximately 10^120 variations (the 'Shannon number'), which is significantly greater. And, obviously, the Hardens were able to solve a 2.56e+76 cipher. Thus, 1.39e+89 should - still - be within some computational sphere.
Thus, the complexity of the Z340 cipher can be placed somewhere between the Z408 and chess; closer to the Hardens', I'd say.
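These orders of magnitude are easy to verify with Python's arbitrary-precision integers (the 26^54 figure for the Z408 is my assumption, derived from its 54 homophones; it matches the 2.56e+76 above):

```python
# Orders of magnitude from the comparison above.
z408 = 26 ** 54      # 54 homophones -> ~2.56e+76 (assumed Z408 figure)
z340 = 26 ** 63      # 63 homophones -> ~1.39e+89
shannon = 10 ** 120  # game-tree complexity of chess ('Shannon number')

print(f"Z408: {z408:.2e}")     # -> Z408: 2.56e+76
print(f"Z340: {z340:.2e}")     # -> Z340: 1.39e+89
print(shannon > z340 > z408)   # -> True: Z340 sits between the two
```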
Considering the cipher structure itself:
- identical homophones in different strings
- repeating bigrams, etc.
is 'key': the English language dramatically reduces the huge number above. By how much? We do not know, because this mostly depends on the linguistic patterns of the English/American language (dictionary). For example, the word QQQQQ does not exist; thus all of its related combinations are excluded, not even computed.
The TASK is to compute as many variations as possible while simultaneously considering the English language together with the cipher structure. In fact, this currently happens in some kind of 'short version' of 26^17 different variations.
An almost philosophical (cryptanalytical) question arises:
Will the total potential of the cipher's encryption method be reduced strongly enough by the English language, with regard to its cipher structure, to make it computable within a specific time period?
In our case, the latter ('computable') is limited to a setting of 26^17, i.e. computing 17 letters. I simply do not have any faster computer available than the one in front of me. The unexpected answer, however, is YES: this can be seen during the ongoing computation as the program practically 'skips' ahead to the next bigram section.
At least when focusing on common bigrams.
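To see why that skipping matters, consider the raw 26^17 space without any pruning (the candidate rate below is an assumed round figure, not a benchmark of my machine):

```python
# Raw 26^17 search space versus an assumed testing rate.
raw = 26 ** 17                       # ~1.13e+24 letter assignments
rate = 10 ** 9                       # assumed: 1e9 candidates per second
years = raw / rate / (3600 * 24 * 365)
print(f"{years:.1e} years")          # -> 3.6e+07 years without pruning
```

Tens of millions of years by brute force; only the language-based skipping brings this down to something a single machine can attempt at all.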
If the pre-set is good, the cipher 'should' be cracked within the next 12 months. HOWEVER: it makes a huge difference whether the computation runs with the correct set-up, e.g. 30 frequent bigrams (30^3 = 27,000 combinations) or all potentially existing bigram combinations (676^3 = 308,915,776; three repeating bigrams are considered). Using all bigrams slows down the computation by a factor of 1:11,441. Eleven thousand years of computation instead of 12 months, just because all bigrams are used instead of only the common ones. This somehow shows the 'sensitivity' of this cracking process.
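The arithmetic behind that factor:

```python
# Three repeating bigram positions, filled either from a short list of
# frequent bigrams or from all 26*26 = 676 possible bigrams.
common = 30 ** 3        # 30 frequent bigrams
full = 676 ** 3         # every possible bigram
print(common)           # -> 27000
print(full)             # -> 308915776
print(full // common)   # -> 11441, the ~1:11,441 slowdown
```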
At least there is now a way to 'cover' multiple quadrillions of the most likely letter variations. From now on, the question is not 'if' but 'how long' it will take to crack the Z340.
For me, this is comfortable: besides smaller modifications, e.g. regarding the bigrams used, the program is working perfectly fine. All I can do is wait and watch.
QT
- Cryptophilosopher -
