How it all got started

April 2024, by George Dochev

4 mins

Back in 2012 I was working remotely from my home in France, while most of my team was in two offices: our main office in Florida and the branch in Bulgaria. Additionally, we had a couple of engineers around the globe working from their homes, like myself.

We were building the next generation of storage virtualization software at DataCore, called SANsymphony. The source code for the product was already quite large, consisting of tens of thousands of files, and building it required dedicated hardware. We had set up an automated build system in Florida which would kick off a build after each commit to the source code repository. Each build was around 10 GB, and with more than 10 builds per day we produced a large volume of data daily. All of our team members needed access to those builds for testing and debugging purposes. Additionally, the bulk of our test team was in Florida, and as part of their testing they often generated dump files that had to be analyzed by developers in other countries. These files were in the gigabyte range as well and also accumulated at a fast rate.

The remote file sharing problem

The two main offices were connected with a VPN tunnel, so the Bulgarian team could see the same file shares as the Florida team. Unfortunately, due to the large distance, the remote file shares proved excruciatingly slow, and hours would often be wasted waiting for the next crash dump to be copied over before troubleshooting could begin. We had a guaranteed-bandwidth network connection, which in theory should have given us good transfer rates, yet in practice we were getting only 1/20th of the available bandwidth. We started testing and came to the realization that the primary reason for the slow access was the inefficiency of the network file protocols: they worked well on the local network but performed very poorly over long distances.

To alleviate some of the pain we wrote our own tool to synchronize the builds produced in Florida to a local server in Bulgaria, so the team there would have local access. This scheme worked OK for the branch office in Bulgaria. The rest of us who worked from home, however, didn't have a dedicated server or the network bandwidth to do a full synchronization, so we still had to resort to the painfully slow VPN/file share access. Add to that the fact that some of us were using Macs, which had their own interoperability issues with Windows file shares, and you get the picture. Every 3 to 4 weeks I would rotate between my home in France, our office in Bulgaria, and the one in Florida. As I worked in all three locations, I noticed how much less productive we were the further we got from the main office.
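
That 1/20th figure is roughly what simple arithmetic predicts for a chatty request/response protocol over a high-latency link: when every block read has to wait for a full round trip before the next request goes out, latency rather than bandwidth sets the ceiling. Below is a minimal back-of-envelope sketch of that effect; the round-trip time, link speed, and block size are illustrative assumptions, not measurements from our actual setup.

```python
# Back-of-envelope: why a chatty, request/response file protocol
# underuses a long-haul link. All numbers are illustrative assumptions.

RTT_S = 0.120            # assumed round-trip time between the two offices, seconds
LINK_BPS = 100e6         # assumed "guaranteed" link bandwidth, bits per second
BLOCK_BYTES = 64 * 1024  # assumed per-request read size for an SMB-style protocol

def effective_throughput(rtt_s, link_bps, block_bytes):
    """Throughput when each block needs a full round trip before the next request."""
    transfer_s = block_bytes * 8 / link_bps  # time to push the block itself
    per_request_s = rtt_s + transfer_s       # wait for the reply, then ask again
    return block_bytes / per_request_s       # bytes per second

tput = effective_throughput(RTT_S, LINK_BPS, BLOCK_BYTES)
print(f"effective: {tput * 8 / 1e6:.1f} Mbit/s "
      f"({tput * 8 / LINK_BPS:.1%} of the link)")
# With these assumed numbers the result is roughly 4 Mbit/s, only a few
# percent of the nominal bandwidth, in the same ballpark as the 1/20th
# we were actually seeing over the VPN.
```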

The idea is born

This is when I thought: wouldn't it be great if there was something like a distributed file system for the Internet that spanned all locations, so the whole team could see one global namespace and access the files within it irrespective of where they were? It wouldn't require a VPN; it would be designed to work efficiently over the Internet by utilizing the available network bandwidth and reducing chattiness. I wouldn't have to synchronize the whole file system, as with the existing file syncing services; instead, I would directly access the remote files, just like on a network share.

So instead of copying a several-gigabyte crash dump file only to discard it when I was done, I would instantly open it in the debugger and start the analysis. Yet I'd still be able to selectively synchronize certain folders, so that our branch office could maintain replicas of the builds. I could also pin a particular build that I used more often, so it would be available locally on my laptop even when offline.

This Internet file system would work equally well on all operating systems, not just Windows, and seamlessly switch between online (connected) and offline (disconnected) modes. It wouldn't require complex configuration from IT, and it would work on any desktop or mobile device. It would be fast, easy to use, and integrated with the OS so that it's practically invisible to the end user. Wouldn't that be awesome! And so the idea was born.

I was sure many distributed teams had similar needs, yet surprisingly there wasn't anything like that on the market. I started doing research and quickly came to the realization that while it all sounded pretty good on paper, creating something that worked reliably would be daunting. It was a classic distributed system with lots of moving parts, running in a heterogeneous environment with devices constantly appearing and disappearing from the network. Hmm… I was starting to see why no one had tackled this successfully. So we had a formidable problem that would bring significant value when solved. I was game.