Lispian: Random meanderings on whatever catches my fancy

The Move and the Big Start

Returning to my recollections of Texar, we come to the latest installment on being an entrepreneur there.

An investment from VCs in the bank and visions of grandeur. That’s where we were in the Spring of 1999.

We needed office space and found 3500 sq. ft. of it in the west end of Ottawa. Nice space, nothing fancy, but nice nonetheless. There’s an old rule of thumb that says 120 sq. ft. per person is adequate, unless you’re using cube farms, in which case you can crunch that down to 64 sq. ft. Not being a believer in overcrowding, I stuck with the old belief of 120 sq. ft. per person, preferably with a door and a window. That meant we could cram about 30 people into the space we’d rented — fewer in reality, as the boardroom was to remain off limits.
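The headcount arithmetic above is easy to sanity-check. The square footages come from the text; the calculation itself is just a back-of-the-envelope sketch:

```python
# Rough office capacity under the two rules of thumb mentioned above.
OFFICE_SQ_FT = 3500
PRIVATE_OFFICE = 120   # sq. ft. per person, door-and-window style
CUBE_FARM = 64         # sq. ft. per person if you pack people into cubes

print(OFFICE_SQ_FT // PRIVATE_OFFICE)  # → 29 (about 30, before losing the boardroom)
print(OFFICE_SQ_FT // CUBE_FARM)       # → 54 (had we gone with cube farms)
```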
Moving from my basement to the new digs required little more than getting phone and Internet service. That took some doing, but soon enough it was done. We had our domain moved over, we had FreeBSD boxes up and running our mail and web services, and we were in business. It was cool! And we realized the real work was about to begin.

We ordered chairs (Aeron, of course) and desks (cheap, Business Depot jobs); I took a used desk we found in the basement of the building we were occupying. The chairs cost us about $1000 apiece while the desks cost us $100, and with the extra basement desks thrown in we equipped each office for approximately $1100 initially. Cheap, really.

We went out and bought inexpensive chairs for the boardroom, but fortunately it came with a large table, which from the looks of it had had the room built around it! Lucky us. Computers were the next order of business. I had been dealing with a small firm in Ottawa, and Joel, the owner, agreed to build the machines and order in the software we needed. It took a few days, but soon enough machines were rolling into the offices. Not the fanciest machines, but ones that would do the job. Everyone got a new box and a 19″ monitor. Now for the coding.

But it wasn’t meant to be. Just as we’d set up the systems, installed compilers, libraries, etc., in came the VCs. They had other plans. They needed to talk, and it couldn’t wait.

So we went into the boardroom and sat. The lead VC, David, said that he didn’t want to see us coding. He wanted designs and requirements created. He didn’t want any code out of us until he saw reasonable plans, both design and test. We needed to hire a QA Manager to complement our Chief Scientist. Tony was asked to join us as we figured out our next steps.

Our next steps turned out to be simple: from what was left of April through June we were to create design and requirements documents. These were to be approved by me, as CEO/CTO/President, and by Tony, our Chief Scientist. And, once we had a QA Manager aboard, all plans and designs would have to have corresponding test plans as well. A release schedule was to be drawn up against estimates. We were to have all of this done by June, in time for the next Board Meeting.

I’m used to doing designs, but have long believed that minimalist designs are better. It’s easier to change the documentation if there’s not much of it. Besides, you typically learn more by doing and by talking to colleagues and friends in the industry than by staring at Word. But “he with the gold rules,” as they say, so we did as we were told. Were I to go back in time I’d have done it differently, though: producing far less documentation and focusing on crisp diagrams and short (1–2 page) documents explaining the key components. Enough for everyone to know what was needed, and something everyone could read. Today I’d dump all that in a wiki, but back then they were too new to even know about.

Back to the reality of 1999.

Techies, design? Documentation? Hmmmm? Odd. OK, we thought, fine. Let’s get down to it. I tasked Tony with leading the charge and we began laying the groundwork for “Algonquin,” as SecureRealms was called before we had a real name for it. The entire team spent days in the boardroom hunched over sheets of requirements, arguing about the pros and cons of XML, custom protocols, X.500, X.400, LDAP, etc. Each discussion resulted in a resolution, and the design elements were laid down and codified as Texar Law. It was 1999, and some things just didn’t exist or weren’t mature enough to use; XML was one, and we had a long argument back and forth over it. The discussion about how the various components that made up SecureRealms would communicate came down to three options:

  1. Use a custom protocol
  2. Use LISP as the protocol
  3. Use XML as the protocol

Number 3 was abandoned because we had no idea how long, if ever, XML would take to be adopted. Furthermore, it would require a lot of work and slow the process down. I also feared the Board wouldn’t agree to us adopting yet another emerging technology. It was bad enough we had decided to build the entire engine in Java; to also add XML to the mix seemed too much in 1999. Option number 2 was also dropped, although it had many supporters. Tony eloquently argued that moving s-expressions around would be both feasible and efficient. Others worried it would be cumbersome and slow; we were worried about performance. I liked Tony’s idea and believed everything should be s-expressions: the Web-based admin tools should spit out s-expressions to the backend engines. They were easy to parse and easy to control. But it would require additional code, and there was the unknown performance issue. Sadly, it was dropped.
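The “easy to parse” claim is worth illustrating. The entire grammar of an s-expression is atoms plus parentheses, so a complete reader fits in a couple of dozen lines. The policy-query shape below is purely hypothetical — it is not Texar’s actual wire format — but it gives a feel for what passing s-expressions between components would have looked like:

```python
# Minimal s-expression parser: tokenize on parentheses, then build
# nested lists recursively. No grammar tables, no schema machinery.

def tokenize(text):
    """Split an s-expression string into '(', ')' and atom tokens."""
    return text.replace("(", " ( ").replace(")", " ) ").split()

def parse(tokens):
    """Consume tokens, returning an atom or a nested list."""
    token = tokens.pop(0)
    if token == "(":
        expr = []
        while tokens[0] != ")":
            expr.append(parse(tokens))
        tokens.pop(0)  # discard the closing ')'
        return expr
    return token

# A made-up policy query, just to show the shape of the data.
query = "(check-access (user alice) (resource payroll) (action read))"
print(parse(tokenize(query)))
# → ['check-access', ['user', 'alice'], ['resource', 'payroll'], ['action', 'read']]
```

Compare that with what an XML parser circa 1999 would have demanded, and the appeal of option 2 is obvious: the parser is trivial, and the nested-list result maps directly onto what a Lisp-flavoured engine would want to consume.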

Finally we had option number 1. It wasn’t the best option, but it was what had been used in early prototypes. It worked, and we opted for a protocol based on AT&T’s Plan 9. As ours was tailored to security, we called it “Plan Nein”. It was a simple and elegant protocol. It had one major flaw: it was fixed, with no room to evolve.

Looking back, we should have opted for s-expressions. They were close enough to XML that we could have easily moved to XML at a later date. But I was overly cautious. Which was weird, because I wanted to do s-expressions. Tony wanted to do s-expressions. And yet, somehow, I talked myself out of it and, in the end, talked everyone else out of it as well. Hindsight would show it was a stupid decision not to go with s-expressions, as they would have provided ample flexibility and better aligned with the core engine.

And it wasn’t that Plan Nein didn’t work. It did. It worked marvelously well. But it wasn’t as flexible as an XML or s-expression solution would have been. And, it turned out, we wouldn’t revisit the notion of s-expressions until Version 3 of SecureRealms was in design a few months before Texar ceased to be.

Again, looking back, the right decision would have been to build a few prototypes and then weigh the pros and cons. I’m now a strong believer in building small demonstrator programs that illustrate some of the capabilities we’re after in a project: code that can be tossed, but that provides insight into what the team is trying to accomplish. I’m sure if we’d done that we’d have quickly realized that Tony was 100% right and the proper way to proceed was with s-expressions. I firmly believe that paper design in software is a complete waste if it isn’t augmented by demonstrators that can be manipulated to get a feel for where the software will ultimately go.

So June rolled around and we had the Board Meeting. It went well and the design was accepted. Tony and I had nagging doubts about not going with s-expressions, but we had chosen a direction and off we went. We laid out the plan to the employees and divvied up the work. Within a few weeks we had a working Plan Nein stack. Within a few months we had a working version of SecureRealms. And we had our first problem. It was slow. Brutally slow.

Plan Nein, being lightweight, was not the problem. It was the engine itself. Written in Java, it was breaking new ground: no one had, as of 1999, tried to write a full-fledged, effectively real-time application in Java. For SecureRealms to work it had to return a response to a complex policy-based query in less than a second. In fact, the specification indicated that it had to do so in a fraction of a second, preferably in a hundredth of a second or less. Its first run took hours. It was so slow we thought it had hung. It was that bad.
Optimization, which should always wait, began in earnest. Tony quickly discovered a series of optimizations that sped the program up enormously. Others found more, and soon we had policies being evaluated in minutes instead of hours. Still brutally slow, but a huge improvement. The VCs dropped by again.

They had been kept abreast of the situation and were worried. If the engine was too slow, the product would fail. No one was going to use a policy engine that took hours to make a decision. Who cares if it could be programmed with any policy you could imagine, if it took hours to make a decision it was as good as useless.

The VCs went right to Alberto, our QA expert. Alberto said we’d made significant progress. They nodded gravely and followed Alberto into the QA Lab. They wanted the truth, so they told Tony and me to stay out while they discussed the situation with Alberto. When they emerged, Tony and I were relieved to see both of the senior VCs smiling. They said they were pleased with the progress, and Alberto had instilled in them the confidence that we’d hit the performance numbers. The speed issue was normal; we had a good team; we’d address it. We quickly endeavoured to ensure Alberto’s prediction would come true.

By the late Fall of 1999 we had the engine running at a reasonable speed. It could perform policy evaluations at a clip of a few dozen a second on a Pentium III. But one of our new hires figured out a way to improve it even further: with Tony’s help, he figured they could get thousands a second. They sequestered themselves away and emerged days later, battered but happy, and offered a CD to Alberto to test out. For the first time, the entire company went to the lab to see what would happen.

Alberto loaded the newest version of the engine onto the test machines and we all waited. He fired up the test suites and we watched. Alberto frowned, checked the screens, and said something was wrong. Tony, grinning, indicated that there was nothing wrong and that Alberto should check whether the results of the run matched prior runs. They did. But the program had finished in seconds. It worked. And it worked amazingly well. It flew!

And shortly thereafter someone else flew in to see SecureRealms run: the VCs. And they did. Vindication. They said that other VCs had said it would never work. Other experts claimed it couldn’t work as advertised. But it did. In fact, in 2004 experts from various security firms told me they still didn’t believe it worked that fast on the hardware available in 1999 — except that they’d seen it run that fast. And, damn, how did we do it? We obviously had something of value. I was to learn this was especially true after others, including folks from various banks and MITRE, told me that they’d spent millions and never gotten such an engine to work efficiently. And ours, written in Java, ran on anything Java ran on. We truly thought we’d hit a home run. We were on our way to the big leagues.

So the VCs watched the demos and noticed that it ran faster than they could have hoped. The worst technical hurdle had been overcome. But more remained. We needed agents and admin consoles. We needed manuals and training material. And most of all, we needed paying customers. The company was about to change shape again in a number of ways, but the one way it most needed to change was with regard to paying customers. And we were to learn over the next three years that timing is everything. Texar had been founded at probably the worst possible moment, as three successive events would suck the life out of the high-tech community and eventually suck the very life out of Texar.

The first event was well known: the year 2000 was approaching and everyone was dreading Y2K. Trying to sell a security product to a world fixated on Y2K was harder than any of us, including our investors, could ever have imagined. Everyone was more worried that there would be no tomorrow than about protecting tomorrow’s assets.

And so we were in for a hell of a ride in 2000, more so than anyone could imagine and nothing like what the Y2K doomsdayers were predicting. We were about to find out if the world was ready for on-the-fly, real-time, rule-based, true policy-based security. Were they ready for a revolution in computer security?

