Monero Monitor

Episode 14: John Tromp on Cuckoo Cycle & John McAfee on Monero

26 Sep 2017

This week's episode is a two-parter! First up, I talk with John Tromp, inventor of the Cuckoo Cycle Proof of Work algorithm. We discuss what it means to be an egalitarian proof of work and how Cuckoo Cycle targets consumer hardware rather than industrial ASICs. We talk about the puzzle being solved by the algorithm itself, and also discuss what types of performance we should expect with optimized CPU and GPU mining software. After that, I'm very excited to share a short conversation I had with John McAfee about Monero. He's been mentioning Monero on a number of TV networks lately, and we talk about why he likes the project.

John Tromp's blog is tromp.github.io. The Cuckoo Cycle project can be found at github.com/tromp/cuckoo. An interesting article on Cuckoo Cycle is on cryptorials.io.

Did you enjoy this episode? Please show your support by donating today.

47SMPWHYcn5AfKzYF3c6cX94dCNcijKfdMgkCBTEmMtG4q9u9GdN7YdAmP777hfkmg52xntkDyxwh3nKcdGa2eApFAe5oZx

Podcast Transcript:


~~ { Introductory Clip and Music } ~~

Mike: Hey, everyone. This is Mike and you’re listening to the Monero Monitor. For today’s show, I’m really excited to share with you all a two-part episode featuring conversations with two very different, but equally interesting people. First up, I’m gonna talk with John Tromp, the inventor of an alternative proof-of-work called Cuckoo Cycle. Then, I’m super-excited to share with you all a short conversation I was lucky enough to have a few days ago with antivirus and libertarian legend, John McAfee. I hope you all enjoy the show, and thanks for tuning in.

Interview with John Tromp

Mike: Thanks for joining me on the show, John. 

John:  You’re welcome, I’m glad to be here.

Mike: Okay, so, John, before we get to the interview, I think it’s important that I kind of fill the listeners in real quick on why I invited you to this show in the first place. So, is it all right if I spend a minute just talking without you giving much input?

John:  Oh, yeah, please, go ahead.

Mike: Okay. So, John is the inventor, I believe that’s the right word, of a proof-of-work called Cuckoo Cycle. And that in and of itself is interesting, but maybe not necessarily relevant to Monero, except that about a month and a half ago, a developer who goes by the name shaolinfry posted an issue on the Monero Research Lab GitHub repository titled “Time for a serious look at proof-of-work change.” And shaolinfry is not really a regular contributor to Monero; I think until recently, he was one of the developers working on Litecoin, but he’s honestly probably better known for being kind of the brains behind the UASF that happened in Bitcoin earlier this year.

And because of that, I think there’s probably a little bit of distrust among some people in the community towards people on both sides of the UASF movement; people who joined one side or the other tend to be quite polarizing, as does the Bitcoin scaling debate in general. And part of it is probably also the simple fact that a proof-of-work change for Monero is a big deal. But anyway, this issue generated quite a lot of chatter; there were dozens and dozens of replies going back and forth among a lot of different Monero contributors, myself included.

And then, in the last couple of weeks, the discussion has fizzled out a little bit, but I do think there was a good, larger point raised, one that I find valid: Monero relies on something called CryptoNight, a proof-of-work that we inherited, for lack of a better term, from Bytecoin, which is what we forked away from. It’s had some peer review, and some people, including in the Monero Research Lab, have investigated it, but perhaps the peer review hasn’t been ideal. It also suffers from slow verification speeds. Still, it’s proven to be a fairly workable egalitarian proof-of-work so far, which basically means that CPUs and GPUs can both mine Monero, and in theory, if an ASIC were ever developed, it probably would not have very radical efficiency gains.

But while that’s proven to be true so far, there’s not necessarily 100% trust that it will continue to be. And so, the conversation led to lots of different suggestions, one of which was an alternative proof-of-work, Cuckoo Cycle, which leads us to today’s conversation. And I want to be clear: I’m not doing this interview because I think we definitely should switch, and I’m not doing it because I endorse John. But I do think John has done some good work, and I thought this would be a great fact-finding adventure to fill in the gaps in everybody’s understanding of Cuckoo Cycle, my own included.


QS 04:25 Mike: So, that’s kind of the frame for today’s episode. But before we get to that, John, I always find it fun to start off getting to know the guest, especially since you’re maybe not as well known in the Monero community. Would you mind telling us a little bit about yourself?

John: Sure. I am a Dutch computer scientist. I did my studies at the University of Amsterdam, and in my last years there I switched to the more theoretical side of computing. So, my thesis was about algorithms and complexity. It was not that focused, really; it covered a lot of different topics that I stapled together. I’ve researched distributed computing and computational biology, and one topic I liked very much is called Algorithmic Information Theory, which is about the ultimate limits of compression, how concisely you can describe things. And I’m also a big fan of puzzles and games, so I try to incorporate that into my research.

Mike: Yeah, I saw. I saw you’re a big fan of the game Go; can you tell us about that a little bit?

John: Oh, yeah. I used to be a chess player, but I think I was 24 when I was introduced to the game of Go. There was somebody at the Centre for Mathematics and Computer Science, where I was doing my PhD, who was a strong Dutch player, and we used to play on a small 9x9 board during lunch. It’s easy to play that way; you can easily finish a game, or multiple games, in half an hour.

So, that appealed to me a lot: the extreme simplicity of the rules combined with the deep strategy that emerges from those rules. That also fits very well with a mathematical approach, because you can directly translate the Go concepts into simple mathematical formulations, closely related to combinatorial game theory. I also wanted to streamline the rules of Go as much as possible, and I worked for a while on getting the most concise formulation of the rules of Go. That’s something you can find on my homepage as well.

And then I’m also interested in the combinatorics of things: how many positions are there, how many games are possible, how long can a game of Go be… So, yeah, that was quite an interesting topic of research that I did together with a guy called Gunnar Farnebäck. It has been a life-long goal of mine to count the number of possible Go positions; it turns out to be quite a big computational challenge. I spent about a decade of research on it, and only recently was I able to gather enough computing power to complete that count. It was actually published in January of last year, so I was very happy to achieve that.

Mike: Yeah, that’s really cool. I have a little bit of an academic background myself and being able to combine your kind of hobbies and interests with your explicit research is really cool. 

John:  Yeah, I’ve been very lucky to be able to do that.

QS 08:05 Mike: So, what about Go was able to tear you away from chess and what makes you keep wanting to go back to it? Is it the lack of a lot of research that’s gone into it or is it something else? 

John: Just simply that I was the first to seriously want to compute these properties of Go, and I very much enjoyed the programming aspect of it as well, writing all the software to do the counting. I had different collaborators on that; there was a guy visiting CWI, Michal Koucký, from Czechoslovakia, and he helped improve things when I was running out of memory to do the computations. He showed how you can make the code hard-disk based, so it actually becomes an interesting application of what Google introduced as the MapReduce paradigm, where you keep processing a large set of items, transform each one into a bunch of new items, and then have some way of combining those items.

So, this turned out to be an interesting application of that framework. It also connects with my other research on how best to formulate the rules of Go: I could better argue why my rules improve on existing ones, namely that the mathematical formulations just become much nicer. There’s a famous rule in Go about how to prevent infinite games, called the ko rule. The basic ko rule doesn’t quite succeed at that, but there is a superko rule that prevents you from repeating earlier positions, and there are some variants of it. And you can very nicely see in the mathematical theory how these different flavors of the superko rule make statements either nicer or uglier. So, that also helped me justify a particular choice of ko rule.

QS 10:18 Mike: Very cool. Okay, so you sound like you basically dedicated your whole life to mathematics and computer science. I can kind of begin to imagine then how you might’ve stumbled upon Bitcoin, but how long have you been involved in the Bitcoin community? 

John: My own memory of that is a little vague. I looked back at my initial postings on the BitcoinTalk forum, and I can see I was getting involved in a proposal by a certain Daniel Larimer, who was introducing the Momentum proof-of-work, and that’s basically the origin of Cuckoo Cycle. It was initially, I think, trying to be an improvement on that idea. In Momentum, you have a proof-of-work corresponding to the basic birthday paradox: you try to find two inputs that have the same output under a hash function.

Mike: Yeah. And so, remind me, the birthday paradox is basically that while any two specific people are unlikely to have the same birthday, if you have enough people, it’s likely that some two of them share a birthday, is that right?

John: Exactly. In order to guarantee that at least two people in the group have the same birthday, you would need more people than the number of possible birthdays, so you would need 367 people. But in a random group of just 23 people, it’s already more than 50% likely that some pair of people in the group share a birthday.

Mike: Yeah. I remember talking about this way back in high school. I think it was my algebra teacher; he was telling us about this, trying to teach us, I don’t know, something to do with probability. And we had 31 people in the classroom, plus him, 32 people.

John: Okay, time for a check!

Mike: He did a check of the birthday problem and with each person, as we went through the room, it was like: “Well, that didn’t match, and that didn’t match.” And finally we got through all 32 and nobody had the same birthday and he just didn’t know what to do because in 40 years of teaching, that had never happened.

John: Yeah, but like I said, the odds are better than even; they’re still far from guaranteed. When you get to 50 people, then it becomes quite likely, much greater than 90%.
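To make those numbers concrete, here is a quick calculation of the collision probability, a minimal sketch assuming the textbook simplification of 365 equally likely birthdays:

```python
# Probability that at least two of n people share a birthday,
# assuming 365 equally likely birthdays (the textbook simplification).
def birthday_collision_probability(n: int, days: int = 365) -> float:
    p_no_collision = 1.0
    for i in range(n):
        p_no_collision *= (days - i) / days
    return 1.0 - p_no_collision

for n in (23, 32, 50):
    print(n, f"{birthday_collision_probability(n):.2%}")
# 23 -> 50.73%, 32 -> 75.33%, 50 -> 97.04%
```

So Mike’s class of 32 had roughly a one-in-four chance of finding no match, less rare than it felt.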


QS 12:35 Mike: Yeah. So, let’s dive a little bit more into Cuckoo Cycle. What motivated you to set about designing a brand new proof-of-work algorithm? Was it just something that was challenging and therefore rewarding, or was it something more than that?

John:  Unfortunately, I don’t remember quite how I went about designing it or what my motivations were. I do know that I had prior experience with this concept of a Cuckoo hash table which is a very interesting data structure. And I guess I somehow combined it with my interest in this Momentum proof-of-work proposal to try to improve on it. And in particular, to better force you to use a lot of memory in solving it. Basically, to reduce the landscape of applicable algorithms because that is really a basic problem with the birthday paradox, there are so many different ways you can go about trying to solve it. That makes it hard to really be sure that you’ve identified the most efficient method.

QS 14:10 Mike: Yeah. Okay. So, I saw in your whitepaper, I haven’t read the whole thing, maybe the first half, that one of the motivations you listed was that you wanted it to be more egalitarian, and that sounds a lot like what people hope CryptoNight is for Monero. What you mean by that is reducing the performance gap between consumer hardware and specialty hardware. Now, without getting into the specifics of the exact problem being solved, we can talk about that later, what are the types of things you want to do if your goal is to design an egalitarian proof-of-work? What design decisions might you need to make in order to accomplish that?

John: For that, you want to appeal to the properties of hardware that differ the least between specialty and commodity hardware. If you look at something like Moore’s law, you see an enormous increase in computing power every generation, every 18 months, and while memory bandwidth, for instance, grows along at a fast pace, a property like memory latency has barely advanced; even over the course of a decade, you barely see a factor of 2 improvement. So, yeah, I wanted to create a proof-of-work that was really going to be latency bound. But in the end, I did not succeed in doing that.

But for a long time, I listed that as an aim and a motivation, though it was also kind of an after-effect. When I wrote my initial solving algorithm, I noticed that it used all these completely random memory accesses, and that made me state that it was motivated by being latency bound. So, for a long time, I was under the impression that it was inherently latency bound, but this year it turned out there is an alternative method of doing those memory accesses, basically by sorting them all, which changes the problem into a bandwidth-bound one.

Mike: Yeah, I had a question about that for later, but you’ve sort of already started talking about it. You had a bounty posted on your GitHub repository for the mining software, and you basically said, “If anybody can beat these metrics, then I’ll give you some amount of money.” I think it was $5,000 or something. And there was a developer recently who, my understanding is, increased the memory requirements by like an order of magnitude or something.

John: The CPU bounty didn’t have any constraint on memory use. So, I said, “If you can speed it up by a factor of 2, it doesn’t matter how much memory you use, then I will pay you.” I think at the time it was a $2,500 bounty.

Mike: Okay.

John: That was $2,500 for a doubling. And then if you managed to quadruple it, that’s like doubling it twice, so I would pay $5,000 for that. That is what xenoncat managed to do, roughly quadrupling the performance.

QS 17:17 Mike: Yeah. So, with all that, what does that mean for… You know, at one point I think CPUs and GPUs had similar performance; is that still the case, or will CPU mining be obsolete as soon as efficient GPU miners are developed for this new algorithm?

John: I think CPUs may have already been somewhat obsolete before, for the latency-bound solver, because the GPU solver I wrote is already four times faster than a high-end CPU, and I wasn’t even sure I had exhausted the possible optimizations on the GPU side, since I’m not that much of an expert GPU coder. So, that was already a factor of four, and now, because the new bandwidth-bound algorithm plays much more to the strengths of the GPU, I expect the gap will grow to at least an order of magnitude, at least a factor of 10.

And in fact, because the new solver shares so many characteristics with the solvers for the Equihash proof-of-work introduced in Zcash, we can expect the performance gap to be very similar to what you see there, which is, I think, about a factor of 20.

QS 18:40 Mike: So, then along the same lines then, with these new improvements making it bandwidth bound instead of latency bound, does that have any impact on whether somebody could go and develop an efficient ASIC that would be able to just put everything else to bed?

John: You could develop an ASIC, of course, but the ASIC would probably still need to connect to regular DRAM memory chips; it’s somewhat inconceivable that you would develop a whole new DRAM architecture.

Mike: Okay. So it wouldn’t be like in Bitcoin, where somebody was able to develop really efficient silicon, blow GPUs out of the water, and do it in a way where nobody else had the capability of building something similar. With this, because it’s so dependent on DRAM, while you can still make ASICs, perhaps more groups would be able to do that, and the costs would be more even across the playing field. Is that about right?

John: Yeah, that’s the big difference between compute-bound and memory-bound proofs-of-work. With a compute-bound one, you want to cram as many little computing agents onto the chip as possible, using as little power as possible. But memory is quite a specialized implementation technology: you actually have a much higher density than on a normal compute chip, and most of the chip die is not active at any given time, which is what allows that density. On a normal computing chip you have to space things far apart so they can dissipate all the power, and that is a different concern for memory chips.

The basic concern with memory chips is that you want to optimize for cost per unit of storage. We already have memory technologies that are noticeably faster than DRAM chips, but they come at a much higher price, namely static RAM. So, if something is 10 times faster but 100 times more expensive, then nobody is going to use it in their mining rig.

And that’s why I think that while you could build a custom memory chip slightly more optimized for Cuckoo Cycle, it would not be price competitive in the least. Especially now, with the bandwidth-bound algorithm, you are doing memory operations that play to the strengths of DRAM, mostly sequential access, and there is not much left to optimize in that memory chip. The worldwide DRAM commodity market has already pushed those costs pretty low.

Mike: So, even though being latency bound is not really a property anymore, the general egalitarian property might still be there, just because of the hardware that’s needed to build any type of miner.

John: I think you will be able to build an ASIC for just the computing part, the part that computes all these little hashes and generates all the memory requests; that part you can optimize. But the idea of Cuckoo Cycle is that this is going to be quite a cheap part. It doesn’t have to be optimized as much as possible; it only needs to be able to max out the memory bandwidth, and I think that can be quite a tiny chip that doesn’t cost much. The majority of the cost will still be in the memory chips.

And also, the majority of the power consumption will be the power used by the memory chips, not by the little ASIC. So there is not much advantage to this custom ASIC solution compared to just hooking up some multi-core CPU that is also able to max out the memory chips.

Mike: Yeah. It sounds to me like maybe you can use an ASIC to really optimize that compute part, but that’s not going to be all that much of an efficiency gain, just because it’s such a small part of the total algorithm, is that right?

John: Yeah. You get a limited performance gain that is not going to offset the increased cost of developing and producing this chip solution.


QS 23:54 Mike: Okay, awesome. Now, I don’t pretend to have the mathematics or computer science background that you have, but I am an engineer, and so I like to get into the weeds a little bit on some stuff. The actual design of Cuckoo Cycle interests me. I saw that in your paper you describe it as a graph-theoretic proof-of-work, and you contrast it with Adam Back’s Hashcash-style proof-of-work. Hashcash is basically the underlying technology behind almost every proof-of-work out there, and you pointed out a few exceptions to that rule, such as Primecoin’s number-theoretic proof-of-work.

Is it easy to talk about the overall differences between what makes something Hashcash versus number-theoretic, or is that a detail that’s interesting to a computer scientist but not actually all that important to the proof-of-work itself?

John: Yeah, almost every proof-of-work you know of is basically just a form of Hashcash. People think there is a big difference between Bitcoin’s proof-of-work and Litecoin’s proof-of-work, but they are just different flavors of Hashcash. The idea of Hashcash is that you need to find an input to a hash function that produces a small output. You then get different flavors of Hashcash depending on your choice of hash function.

Bitcoin chooses the double application of SHA-256, and Litecoin chooses an application of the scrypt hash function in an attempt to make it memory-hungry. But of course, it’s only using 128 kilobytes, which is not that memory-intensive. It had to do so because it wanted to keep the cost of verification cheap.
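The Hashcash idea John describes fits in a few lines. Here is a minimal sketch using Bitcoin’s double SHA-256 as the hash function; the header bytes, the 8-byte nonce encoding, and the example target are illustrative assumptions, not Bitcoin’s actual block serialization:

```python
import hashlib

def double_sha256(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def mine(header: bytes, target: int) -> int:
    # Try nonces until the hash, read as an integer, falls below the target.
    nonce = 0
    while True:
        h = double_sha256(header + nonce.to_bytes(8, "little"))
        if int.from_bytes(h, "big") < target:
            return nonce
        nonce += 1

def verify(header: bytes, nonce: int, target: int) -> bool:
    h = double_sha256(header + nonce.to_bytes(8, "little"))
    return int.from_bytes(h, "big") < target

# Halving the target makes a proof twice as hard to find, on average.
target = 1 << 240          # out of 256 bits: ~2^16 expected attempts
nonce = mine(b"example header", target)
assert verify(b"example header", nonce, target)
```

Note the asymmetry problem John raises next: whatever memory the hash function needs for mining, the verifier needs too.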

QS 25:08 Mike: Yeah, is that a trade-off that all Hashcash functions have to make? Because I know that one problem with Monero’s proof-of-work, it’s also a Hashcash style proof-of-work, is that it…

John: Exactly. You cannot really afford to make a Hashcash proof-of-work very memory-intensive, because everybody verifying the proof-of-work has to use the same large memory resources, which is going to be time-consuming, and it’s going to be difficult for certain devices, like simple mobile phones with limited memory, or even embedded devices. So, yeah, that’s why it’s much better to have an asymmetric proof-of-work, where an efficient attempt at finding the proof requires a lot of memory, but verification requires hardly any.

QS 25:54 Mike: Okay. Can we talk specifically about how Cuckoo Cycle accomplishes that? I saw a blog post that tried to describe it in layman’s terms, and basically they described Cuckoo Cycle this way: if you imagine having billions of cities, with a road between every pair of cities, and you randomly picked 42 of those roads, you would be looking for a cycle, a loop that went through 42 different cities and connected them…

John: There is no random picking of 42 roads; it’s just that you have this huge graph in which the nodes are divided into two groups. In graph theory we call that a bipartite graph: the nodes are partitioned into two groups, and edges run only between the two groups, not within a group. And the question is: is there a cycle of a certain length in this large random graph?

The sense in which the graph is random is this: you fix the number of edges, and for edge number i, you choose the endpoints by applying some hash function. You feed i together with bit 0 into the hash function, which gives you a random output that you map to a node on one side; then you feed i together with bit 1 into the hash function, and that gives you another random output denoting the other endpoint of the edge, in the other group.

So, in this way, you produce a fixed number of pseudo-random edges, and that defines your random graph. Then you can ask: is there a cycle of a certain length in this graph, and how can we find it? The trick is that in order to check that somebody found a cycle, you don’t need any memory at all, because the person just gives you the list of, let’s say, 42 edge indices, and you can map these to the 84 endpoints with the same simple hash function and then check that they are all connected in a cycle. That can be done practically instantly, without using any memory; that is what gives you the super-fast verification of Cuckoo Cycle. But finding a cycle is going to take a lot of memory to do efficiently.
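Here is a sketch of that verification step: recompute the 84 endpoints from the 42 edge indices and walk the cycle. It uses a keyed BLAKE2b as a stand-in for the SipHash function John discusses later, with toy parameters; the helper names and exact encodings are illustrative assumptions, not his reference implementation:

```python
import hashlib
from collections import defaultdict

N = 1 << 20              # nodes per side of the bipartite graph (toy size)
PROOF_SIZE = 42          # required cycle length

def endpoint(key: bytes, edge_index: int, side_bit: int) -> int:
    # Feed the edge index together with bit 0 or bit 1 into a keyed hash
    # (a stand-in for SipHash) to get a pseudo-random node on that side.
    data = edge_index.to_bytes(8, "little") + bytes([side_bit])
    digest = hashlib.blake2b(data, key=key, digest_size=8).digest()
    return int.from_bytes(digest, "little") % N

def verify_cycle(key: bytes, proof: list[int]) -> bool:
    # Memoryless verification: recompute 84 endpoints from the 42 indices
    # and check that they link up into a single cycle. No big tables needed.
    if len(proof) != PROOF_SIZE or len(set(proof)) != PROOF_SIZE:
        return False
    edges = [(endpoint(key, i, 0), endpoint(key, i, 1)) for i in proof]
    incident = defaultdict(list)      # (side, node) -> incident edge slots
    for e, (u, v) in enumerate(edges):
        incident[("u", u)].append(e)
        incident[("v", v)].append(e)
    if any(len(es) != 2 for es in incident.values()):
        return False                  # every node on a cycle has degree 2
    # Walk edge to edge; a valid proof returns to edge 0 after 42 steps,
    # having crossed all 42 edges exactly once.
    e, node, seen = 0, ("u", edges[0][0]), set()
    for _ in range(PROOF_SIZE):
        a, b = incident[node]
        e = b if e == a else a        # leave the node via its other edge
        seen.add(e)
        u, v = edges[e]
        node = ("v", v) if node[0] == "u" else ("u", u)
    return e == 0 and len(seen) == PROOF_SIZE

# A random list of indices is (almost surely) not a cycle:
print(verify_cycle(b"header-derived key", list(range(42))))   # False
```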

Mike: So, in Hashcash, the way you regulate the difficulty, which is what keeps Bitcoin’s block time at roughly 10 minutes, is by lowering the bar: you have to find a smaller and smaller number…

John: Exactly. There’s a nice relationship between the difficulty of finding a Hashcash proof and the threshold on the output of the hash function: if you make the threshold twice as large, then it’s twice as easy to find the proof-of-work. So, we borrowed this feature of Hashcash to implement difficulty in the Cuckoo Cycle proof-of-work. Once you’ve found your cycle, you apply a hash function to the list of all 42 indices, and you additionally require that output to be below a threshold. It’s really a Cuckoo Cycle proof-of-work nested within a Hashcash one, in order to get the nice difficulty adjustment.
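A sketch of that outer check; the serialization of the indices and the hash choice here are assumptions for illustration, not the actual specification:

```python
import hashlib

def proof_meets_difficulty(proof: list[int], target: int) -> bool:
    # Hash the list of 42 cycle indices and require the output, read as
    # an integer, to fall below the difficulty target (Hashcash-style).
    blob = b"".join(i.to_bytes(4, "little") for i in proof)
    digest = hashlib.sha256(blob).digest()
    return int.from_bytes(digest, "big") < target
```

Doubling the target then makes a found cycle twice as likely to qualify, which gives the same smooth difficulty adjustment as plain Hashcash.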


QS 30:11 Mike: Okay. And so, I’ve always heard this rumor, people always talk about how Cuckoo Cycle is not an easy thing to implement for pools, and the reason would be that pools take advantage of that Hashcash bar you need to meet. They say, “Okay, if you don’t quite meet the bar, but you meet a slightly easier bar…” At least, this is my understanding of how pool mining works. “If you meet a slightly easier bar, then we’ll accept that as a share towards the pool’s proof-of-work, because you’re showing that you did work; you just didn’t quite do the optimum work that we needed.” Does that mean that Cuckoo Cycle actually can be pool-mined because of that same exact property?

John: What you need for a pool is for the difficulty of coming up with a share to be sufficiently lower than the difficulty of coming up with an actual solution. With Cuckoo Cycle, it takes quite a bit of effort to even make one proof attempt. With the latency-bound miner, that could be counted in dozens of seconds, but with the new bandwidth-bound miner, it looks like a GPU will be able to run a proof attempt in less than a second. So, if your block interval is counted in minutes and there are enough miners worldwide, there will be perhaps millions of Cuckoo Cycle attempts per block interval, and that means there’s plenty of room to define a share.

For instance, the simplest possible share would be any 42-cycle that is found, regardless of the Hashcash constraint on top of it. If the Hashcash difficulty is at least, let’s say, 100, then finding any 42-cycle will be 100 times easier than finding a block that is actually accepted. I think that’s quite enough room to do shares. Actually, I expect the difficulty will climb well beyond 100; it will be closer to a million than a hundred. So, you will probably just be able to define shares as you do in Bitcoin, as something that has at least 1%, or maybe a tenth of a percent, of the actual difficulty.
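In pool terms, that means accepting any found cycle whose outer hash meets an easier target than the network’s. A hypothetical classification of one proof attempt, reusing the `proof_meets_difficulty` sketch above:

```python
def classify_attempt(proof, block_target: int, share_target: int) -> str:
    # share_target is larger (easier) than block_target; hypothetical pool
    # logic built on the proof_meets_difficulty sketch shown earlier.
    if proof is None:
        return "no cycle"        # most attempts find no 42-cycle at all
    if proof_meets_difficulty(proof, block_target):
        return "block"           # meets the full network difficulty
    if proof_meets_difficulty(proof, share_target):
        return "share"           # easier target: credited proof of effort
    return "discard"
```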

QS 33:14 Mike: It sounds then like another important design consideration you would want to take into account… You said it’s about a second on a GPU now to go through a proof attempt; that would probably mean you would want to design whatever block…

If somebody was implementing a new blockchain; so for instance, I’ll point out, for anybody who’s heard of the Mimblewimble protocol, there’s a project called Grin that is attempting to be the first implementation of Mimblewimble, and they, I believe, are implementing Cuckoo Cycle. I’ve actually been reading quite a bit about how they’ve been doing that in order to prepare for this interview.

But it would seem to me that you would want to make a conscious decision about your block time, so that people don’t regularly end up half-way through a Cuckoo Cycle attempt when a block is found. A second sounds like a small amount of time, but it’s still a meaningful fraction of the block interval: with a 15-second block that’s 1/15th of the time, and with a 30-second block, 1/30th.

So, would you say that somebody implementing Cuckoo Cycle should be really careful about what they choose as their block time? And what do you think would be a reasonable smallest time to choose? For instance, with Monero, we have 2-minute block times; is that long enough that you don’t have to worry about lots of people wasting compute cycles because of how long the computation takes, or would you maybe want to go to 4 minutes? Do you have any intuition about what the right design is there?

John: Yeah, this discussion also came up when Zcash was choosing their proof-of-work. It looked like Equihash was going to take quite a number of seconds per proof attempt, because they still hadn’t optimized the solver at all; the original Equihash authors were suggesting a set of parameters that they found was going to take 30 seconds, and that scared them, because they were using a two-and-a-half-minute block interval.

And yeah, 1/5th of a block interval for a single proof attempt is very uncomfortable, as you will agree, so that made them decide to switch parameters even before they had fully optimized implementations. With the revised parameters, it turns out you can do a proof attempt in a tiny, tiny fraction of a second; people are now doing something like 600 attempts per second on GPUs, so it seems they were overly conservative.

With Cuckoo Cycle, we have a much better idea of how well an optimized implementation will perform. It will be on the order of a second on a GPU, and I think when your block time is at least a minute, that gives you a comfortable margin. You will be wasting less than a percent of computing resources on average, because if somebody finds a block midway through, say, your 60th proof attempt, you’re looking at a compute waste on the order of 1%, which I would think is acceptable.
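That estimate is simple arithmetic: when a block arrives, a miner loses, on average, half of one in-flight attempt. A quick check under the numbers John gives:

```python
attempt_seconds = 1.0   # estimated bandwidth-bound GPU proof attempt, per above
block_seconds = 60.0    # a one-minute block interval
expected_waste = (attempt_seconds / 2) / block_seconds
print(f"{expected_waste:.2%}")   # 0.83%: under 1% of compute wasted on average
```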

Mike: Okay. And if somebody didn’t think that that was good, then you could double the time or quadruple the block time or whatever, and that would help them?

John: In that case, you would adjust the difficulty of a proof attempt by halving the size of the graph.

Mike: Oh, interesting.

John: Yeah. Basically, the amount of memory and the time used by Cuckoo Cycle are directly related to the size of the graph, which grows in factors of 2. My recommended size is 1 billion nodes, which I call Cuckoo30 because it is 2^30 nodes. That’s also nice because, with the bandwidth-bound algorithm, it takes about 3.3 gigabytes of memory to solve, which is right in the sweet spot for GPUs; almost every GPU has at least 4 gigabytes of memory. And that’s why you wouldn’t want to raise the graph size to 2 billion, because then you’d need a GPU with at least 6 or 8 gigabytes of memory, and you’re already shutting out a lot of possible miners.
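The memory scaling John describes is easy to sanity-check. Using the whitepaper’s convention of half as many edges as nodes, and back-deriving the per-edge cost from the 3.3-gigabyte figure (that per-edge constant is an inference here, not a number from the source):

```python
GIB = 2 ** 30
nodes = 2 ** 30                     # "Cuckoo30": about a billion nodes
edges = nodes // 2                  # whitepaper convention: M = N/2 edges
bytes_per_edge = 3.3 * GIB / edges  # ~6.6 bytes/edge, inferred from 3.3 GB
for shift in (30, 31):              # graph sizes grow in factors of two
    mem_gib = (2 ** shift // 2) * bytes_per_edge / GIB
    print(f"Cuckoo{shift}: ~{mem_gib:.1f} GiB")   # ~3.3 GiB, then ~6.6 GiB
```

Doubling the graph size doubles the memory, which is why Cuckoo31 would already push past the 4-gigabyte cards.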


QS 37:19 Mike: Right. So, is the mining software today at the point now where… One of the things that came out of this discussion on GitHub was that if we did switch proof-of-work, we would want to have miners that were already optimized. In other words, we would want to avoid what usually happens with a brand new coin, where there’s a little bit of an arms race from a software perspective, and certain people have better mining software than others just because they’re able to develop it. So, somebody raised the point that we would want to already have optimized software. Is Cuckoo Cycle at that stage, or do you need help from developers to build out mining software?

John: It is not at that stage yet. Right now, I believe the CPU solver is pretty well optimized, and for that, of course, there is a bounty, currently at $8,000, if you can double the performance of the CPU miner; I will also pay out half the bounty if your improvement is only a factor of the square root of 2. Of course, that’s not a very tight bound; you might consider that something which can still be optimized by 25% is not very mature, and I would agree with that. But I have to set a minimum threshold for these bounties.

Mike: Right. And that’s more mature than something that could be, you know, 2X or 4X or 10X in its performance. 

John: Yeah, but that is the CPU side. And of course, we expect most people to be mining on GPUs, and we still have to write and optimize the GPU solver. So, I just recently started work on implementing a CUDA version of the bandwidth-bound solver.

Mike: Okay, and that would be for NVIDIA GPUs, for anybody who’s not sure.

John: Exactly, yeah. But this is just going to be a blueprint for a GPU solver, because I don’t have the skills to code the best GPU implementation. It can then serve as the basis for other, more experienced GPU coders to implement more optimizations. I don’t know who is going to do that. There are several projects that aim to use Cuckoo Cycle, including a project called Aeternity that’s also planning to launch early next year, so I’d want to sponsor some porting effort there as well.

I would suggest if you want to have mature mining software available at the time of the switch, then you would have to wait until sometime in 2018 when Cuckoo Cycle is actually deployed in other cryptocurrencies and maybe people like, what’s he called, Claymore–

Mike: Claymore or Wolf, there’s a few of them.

John:  All these anonymous mining developers, they will probably apply their efforts to generate some developer fees on mining the new cryptocurrencies. 

Mike: I imagine also, in addition to these projects coming in and miners being developed for them, that if the Monero developer community were to decide they wanted to switch, perhaps a bounty might be raised and things like that, because there’s precedent for the Monero community raising funds to optimize mining software. So, I imagine if we did want to make that switch sometime in the future and there were still optimizations to be made, the Monero community might be able to help make that happen. But that’s so far off in the future, who knows.

John:  Yeah, depending on my own confidence in the optimization level of my CUDA miner, I will also of course be offering bounties for further improving its performance.


QS 40:55 Mike: Okay, I mentioned Grin, you mentioned Aeternity, are those kind of the two big projects that have already started working on implementing Cuckoo Cycle, is there anything else? 

John: Well, if I hear about anything else, I will list it at the bottom of the project page. Right now, there is one other project listed there that is basically just a Bitcoin proposal, working not at the consensus level but, I think, at the networking level, to prevent certain denial-of-service attacks. Somebody proposed that they could use a simpler Cuckoo Cycle to accept peer connections under certain circumstances.

Mike: Okay. That kind of reminds me of the original motivation for Hashcash which was to make people kind of prove that they did something before they send you an email to try and prevent spam emails.

John: Right. Originally, all these proofs-of-work came about in trying to fight spam. And there’s also somebody who is trying to build a decentralized discussion forum, basically a decentralized Reddit, and he’s also interested in using Cuckoo Cycle for that. So, you can find all four of these projects listed at the bottom of my Cuckoo Cycle project page.

Mike: Okay, cool, that sounds good. 

John:  And if I hear of anything else that’s planning to use it, I’ll add it to that list. 

QS 42:26 Mike: Awesome. Okay, well, thanks for an awesome discussion. Did I miss anything that you think is important for somebody to hear about Cuckoo Cycle? 

John: When you mentioned at the start how Monero’s proof-of-work is also designed to be egalitarian, I thought I should mention that it’s not just striving for equality between CPUs and GPUs, but currently, I think, only between recent x86 CPUs and GPUs, because it relies on having efficient AES instructions. So, it is biased, and that’s something I tried to avoid in Cuckoo Cycle. For the basic hash primitive in Cuckoo Cycle, I chose something that is easy to implement on all CPUs, so there would be no advantage of x86 CPUs over ARM or whatever is powering your phone.

Mike: Oh, interesting.

John:  I think all CPUs should be able to efficiently compute the cryptographic operations. I don’t want to try to exploit one particular implementation on one particular CPU.

Mike: Yeah, that could be really important, even just from a node perspective. If any type of hardware can do the verification really easily, without having to worry about all the complexities of the proof-of-work and whether it has AES-NI instructions and stuff like that, then I think that has a lot of benefits for the network in general. Cool, thanks for pointing that out.

John: Yeah, the hash function underlying Cuckoo Cycle, the one used to define the graph, I chose to be as lightweight as possible while still having some cryptographic strength. This is the SipHash function, in particular SipHash-2-4, with a total of 6 rounds. It’s quite lightweight to compute, and it provides only a 64-bit output, rather than the 256 or 512 bits of output of the big cryptographic hash functions. So, it’s quite easy to compute, and I don’t need more than 64 bits of output. In fact, with the recommended parameters, I only use 32 bits of output, but at least it’s ready to scale beyond that.
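For the curious, SipHash-2-4 is compact enough to sketch in full. This follows the single-64-bit-word shortcut that, to my understanding, the Cuckoo Cycle reference code uses (no length padding; two compression rounds plus four finalization rounds make the six rounds mentioned above). Treat the details, including the 2i / 2i+1 endpoint encoding, as a best-effort reconstruction rather than the canonical implementation:

```python
MASK = (1 << 64) - 1

def rotl(x: int, b: int) -> int:
    return ((x << b) | (x >> (64 - b))) & MASK

def sipround(v0, v1, v2, v3):
    v0 = (v0 + v1) & MASK; v1 = rotl(v1, 13); v1 ^= v0; v0 = rotl(v0, 32)
    v2 = (v2 + v3) & MASK; v3 = rotl(v3, 16); v3 ^= v2
    v0 = (v0 + v3) & MASK; v3 = rotl(v3, 21); v3 ^= v0
    v2 = (v2 + v1) & MASK; v1 = rotl(v1, 17); v1 ^= v2; v2 = rotl(v2, 32)
    return v0, v1, v2, v3

def siphash24_u64(k0: int, k1: int, word: int) -> int:
    # SipHash-2-4 over a single 64-bit word, keyed by (k0, k1), which would
    # be derived from the block header. The initialization constants are
    # the standard "somepseudorandomlygeneratedbytes" values.
    v0 = k0 ^ 0x736F6D6570736575
    v1 = k1 ^ 0x646F72616E646F6D
    v2 = k0 ^ 0x6C7967656E657261
    v3 = k1 ^ 0x7465646279746573
    v3 ^= word
    v0, v1, v2, v3 = sipround(v0, v1, v2, v3)   # 2 compression rounds
    v0, v1, v2, v3 = sipround(v0, v1, v2, v3)
    v0 ^= word
    v2 ^= 0xFF
    for _ in range(4):                          # 4 finalization rounds
        v0, v1, v2, v3 = sipround(v0, v1, v2, v3)
    return (v0 ^ v1 ^ v2 ^ v3) & MASK

def edge_endpoints(k0: int, k1: int, i: int, half_n: int):
    # Edge i's endpoints: feed i with bit 0, then bit 1, and reduce each
    # 64-bit output to a node index on its side of the bipartite graph.
    u = siphash24_u64(k0, k1, 2 * i + 0) % half_n
    v = siphash24_u64(k0, k1, 2 * i + 1) % half_n
    return u, v

print(edge_endpoints(0x0123456789ABCDEF, 0xFEDCBA9876543210, 0, 1 << 29))
```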

Mike: Yeah. Okay, thank you for talking with me and for getting into some nitty-gritty details of proof-of-work.

John:  Thanks for inviting me.

Mike: Yeah, I mean it’s not every podcast that I get to talk about such technical parts of a project, so I really enjoyed this and hopefully some of the listeners did too. If people wanted… You already talked about your personal webpage and the Cuckoo Cycle page, if people wanted to find out more about you or get in touch with you, are those the best places to do that? 

John: Certainly. Just go to the project page and read up on all the references there; you can read the blog entry I wrote, and the whitepaper, even if it’s somewhat out of date now. And yeah, people are welcome to contact me via email or even to raise issues on the GitHub page, and I’d be happy to answer any further questions on a Monero-related forum or mailing list.

Mike: Okay. I’ll be sure to let you know when I publish this podcast, and maybe point you to where I post it on Reddit, for instance, in case anybody has feedback like that. I’ll also make sure to include links to your homepage and the project page in the show notes, so people can find those easily. And yeah, unless you’ve got anything else, do you want to call it a day?

John:  Sure. Thank you very much for having me on the show, Mike. 

Mike: Yeah, thank you, John. Have a good day.

John:  Okay, bye, bye!


Discussion with John McAfee

Mike: And that was John Tromp. I hope you all found that interesting and maybe a little bit informative. Shifting gears now, let’s just jump right into the conversation I had with John McAfee. This interview wasn’t planned; I just kind of lucked into getting a 5-minute exclusive with him. Enjoy! 

Mike: Hey John, welcome to the show, how are you doing?  

John:  I’m great, thank you.

QS 46:57 Mike: Thanks so much for agreeing to talk with me on the spot. You’ve been out in the media talking about cryptocurrencies and Bitcoin a lot lately, and one thing it seems you keep mentioning is Monero. What about Monero has drawn your attention?

John:  Here’s one of the things that is unique about me. I am fundamentally a cyber-security expert. So, I spend more time on the dark web than anybody because if I don’t, how can I be a cyber-security expert if I do not know my enemy, what my enemy does, how he behaves, what tricks he has up his sleeves?

So, one of the things that I noticed over the past 8 months is that Bitcoin, which used to be the cryptocurrency of choice when buying illegal drugs, or hiring a hitman, or wanting to buy a monkey from Spain which is illegal to import, whatever it is, Bitcoin was the token of choice. Now, nobody uses Bitcoin. Everybody uses Monero. However, in the past few weeks, some of those sites that took Monero are now taking Verge, V-E-R-G-E, which is, like, strange…

Mike: Yeah, have you looked into Verge at all? 

John:  I have not, no, Sir. Have you?

Mike: So, all that Verge is, is Bitcoin with Tor built in, so there is no unlinkability or untraceability like there is in Monero; all it does is hide your IP address.

John:  Really? See, I did not know that. 

Mike: So, it doesn’t sound like a terribly useful coin. 

John: All I know is what I see on the dark web, and what you see on the dark web will, within a few months, sometimes a year, appear in the open, real world. Like Monero: it passed Bitcoin, and then everybody was accepting Monero. Now I’m noticing that people are accepting Verge; that’s all I’m saying. I have no clue what the realities are.

Mike: So, given that it’s really just Bitcoin with Tor built-in, do you think that sounds like a good idea? Because we already know that Bitcoin is very traceable. And what was it, the IRS came out the other day and they’ve said they’ve traced 50% of all transactions.

John:  Yes, yes. If that is in fact true, then yes, it’s not a good idea. However, people are using it. That’s all I’m saying. 

Mike: Okay, interesting.

John:  And if people are using it, then there is some value to it. 

QS 49:27 Mike: Now, do you think… Has Monero trapped itself in a corner where that’s what it’s good for? Or is Monero, because it is untraceable and because it’s something that provides privacy, is it something that can be just a general money because money itself is private, or at least should be? 

John: Well, here’s the issue. I believe that everyone, everyone, has secrets: you, me, your wife, your parents, your children, your friends. Everybody has secrets. And what is the greatest secret that we try to keep? Our financial status. We either exaggerate it, we’re broke but we pretend to be millionaires, or we minimize it, we’re millionaires but we pretend to be broke. There are reasons for both of those behaviors.

So, if everybody has a financial secret, well, then, good Lord, everybody would want Monero. Please, I mean, if you can say, “If you use Monero, nobody can find the sender or the receiver of any transaction.”

Mike: Or the amount of the transaction too. 

John:  Or anything else, nothing about the transaction. So, then of course, people are gonna go: “Huh, I like that.” Why? Because it meets a human need. And that human need is “I have secrets which I would like to keep”. That’s it. Pure and simple. So, given that, of course Monero can be extrapolated into the world beyond the dark web where we all have secrets and we wish to keep them.

Mike: And I think just your personal wealth, in and of itself, is a secret, even if there’s nothing secretive about it.

John:  If you are smart, you do not want people knowing what you’re worth, okay? This is a fact of life.

Mike: Right. Because if somebody knows what you’re worth, then they might charge you more for whatever service it is they’re giving you.

John:  Or the government may tax you more, okay. And the biggest secret we have is from our governments that want to take the money that we worked so hard for. Please, God, if I gave you a tool or some utility that says, “Okay, if you use this, the government will not know how much money you made.” I’ll go, “Well, thank God, finally!” So, this is the fact as I see it, and again, I could be wrong. But you know, I’m generally right, I hate to be arrogant.

In any case, I need to run; my wife is looking at me with daggers in her eyes.

Mike: Okay, well, you did great, thank you! Have a great rest of your day. 

John:  Thank you, Sir. Goodbye, Mike.

~~ { Closing Music } ~~