Interview with John Bridges (PCPaint, GRASP, GL Pro)

John Bridges is a software engineer who co-developed PCPaint, which in 1984 was one of the earliest digital painting programs and helped spur the use of mice within the IBM PC market. He supported the growing number of PC video developers by freely releasing the VGAKIT and TGA utilities, tools that helped developers understand and compensate for the many hardware and software quirks of the era. He was also at the forefront of presentation software, developing GRASP, GL Pro, and AfterGrasp, which supported the commercial development of interactive games, screen savers, and demos of graphical algorithms.

In this interview, we seek to understand the engineering craft during the pioneering days of the 1980s and 1990s, how Bridges approached design and implementation choices, and the impact of a programmer outside academia and the big technology firms.

screenshot of PCPaint 2 w/ palettes of tools and settings around a blank content area

PCPaint Plus 2.0 (Credit archive.org)

Q: What led you to software development? What was your first computer?

I was excited about computers from an early age, but didn’t get to touch one until my father brought home a new Texas Instruments TI-59 programmable calculator with printer when I was 13. It was programmed in something like a simple floating point assembly language, and I really enjoyed exploring its limits. I had an interest in prime numbers then, and tried some experiments calculating as many as possible in its memory by packing multiple integer values into each floating point variable. Over the next few years I had access to an IBM 1130, programming in Fortran, at Stuyvesant High School, and a Tektronix 4052 at an internship at NYU Medical Center. The first “real” computer I owned was an Apple II+ with a floppy drive, where I started programming in Basic and assembly language. I pored over a bound printout I made of the infamous DOS 3.3 Disassembly to figure out how to speed up reading by 500% and wrote an article in Hardcore magazine entitled HyperDOS.

My first commercial software was the RAM Disk and Task Swapping/Hacking utilities included with the Know-Drive, made by Abacus Enterprises. I didn’t get to make a lot of money, but it was enormous fun! That’s eventually how I did the screen captures of the Apple MousePaint program that we used to mock up the first PCPaint screens.

My first real job programming was the summer after high school at CCM (Classroom Consortia Media), a small company founded by teachers to make educational software. I was recommended by my 8th grade math teacher, whose son attended the same high school as me and had told his father about my computer prowess.

At first it was on Monroe color computers, where I mostly wrote a drawing program that became an early version of SuperDraw. CCM then made a deal with IBM to produce educational science software as part of their efforts to get IBM PCs into schools. IBM provided a bunch of PCs, and I soon got an early IBM XT to take home so I could work from both home and the office. At CCM we decided to go with C, which I knew almost nothing about. My first tasks were to create all the tools we would need, including graphics routines, utilities to create image libraries, and a drawing program to create/edit images, which eventually became a commercial product as “SuperDraw”.

Since I was already comfortable with assembly language it was easy for me to pick up x86 assembly on the PC. Because I knew nothing about C or the terminology of C programming (like pointers/structures/indirection/parameter passing), I often fell back on looking at a disassembly of the compiled C code to see what was really going on. That allowed me to directly translate my ideas of how the computer worked into higher level C constructs, and interface my assembly routines with the C calls.

It also meant a lot of my early C code tended to have a lot of global variables, a habit that took me many years to break! Because of a fondness for Forth, and the whole philosophy of breaking code into small functions, I developed a style of using tons of tiny little functions. Whenever I started to see code repeating I’d break it out into a generalized function, and nest this sort of process. It often led to major refactoring, but I found it easier to try to isolate logic as much as possible.

As for which compiler/assembler, we initially used the Computer Innovations C compiler for DOS, then eventually switched to Manx Aztec C, then quite a few years after that to Microsoft C++ when Manx faded away. For assembly I used Microsoft MASM until I moved to inline assembly as soon as that was available. When we were porting our educational software to Apple II computers we also used the Aztec 6502 C compiler. The relationship with Manx developed to the point that they were selling SuperDraw for us for a while.

Q: One of your earliest commercial projects (c. 1983) was a graphics library for Classroom Consortia Media (CCM), which was working with IBM to develop educational software. What challenges did you face? How did IBM contribute?

In the early days I didn’t deal with IBM. It was closed door meetings and dog/pony shows.

Our company had already shown early versions of the educational software we had made on Monroe color computers, so IBM wanted that on PCs so they could sell IBM PCs to schools. At some point I was in a meeting at IBM discussing what our ideal machine would be for educational software. I pushed hard for 1 byte per pixel so we could get a lot more colors, and much simpler graphics programming. 320x200 was a practical limit at 1 byte per pixel because the CPU could only address 64k at a time.

Months after that meeting we got early access to a prototype PCjr, and although the graphics were a WONDERFUL step up, the rest of it was quite disappointing. It was like they had intentionally crippled the machine, and it was a flop. CCM had rapidly expanded, moving to two floors of a larger building across the street, and hiring more management, office staff, and developers. The PCjr flop further increased financial problems. IBM couldn’t entirely fund CCM. We were expecting software sales, but the PC and PCjr never became popular for schools, particularly elementary and middle schools which was our focus. CCM went through multiple layoffs and reorganizations until they moved to a much smaller office with only a handful of people in 1987.

At that time I made an offer to IBM to show off their replacement for the PCjr, the IBM Model 30. I had a proposal to do motion video, one quarter screen, unheard of then on a slow 8 MHz PC. Unfortunately we couldn’t use IBM hardware to make this demo because IBM had tried to push the entire industry away from the standard PC platform to the new PS/2 platform using a new interface slot. The problem was that the video capture card we needed to produce the video was not available for any IBM machines. So IBM paid for me to purchase a Compaq 386/20 PC and an AT&T Targa16 video capture card. They also paid for an industrial controlled laserdisc player, and for having the demo video pressed onto a laserdisc. It had to be laserdisc because the Targa16 could only capture still frames, and laserdisc was the only way to get single step random access to high quality video frames.

The biggest challenges of this project were basically inventing all the software/algorithms from scratch. This was before the internet and easy research, so I basically made it all up. When I needed to scale hi color images: scaling down, I would combine pixels with percentages of other pixels; scaling up, I’d use percentages of nearby pixels to make new pixels. That became the TGASCALE utility. It wasn’t ideal for scaling up, but worked nicely for scaling down. For reducing 15bit hi color down to palette images with 256 or fewer colors, I started with simple popularity: building a table of all colors used, sorting by how often each appeared, then picking the most popular for the palette. This worked fine for large smooth gradients, but left details with no decent color choices. I started combining colors, so very similar colors would be replaced with the most popular until enough color details could be rendered. I mostly came up with optimal default values for combine level vs total colors by using a set of test images and my eyes. For instance, I had some shots of Johnny Carson on The Tonight Show which were reminiscent of the educational shots from IBM. All that color reduction code became the TGAPIC utility.
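For illustration only, here is a minimal sketch of the popularity step described above (not the original TGAPIC source; all names are made up, and the merging of similar colors is omitted): count how often each 15-bit color appears, sort by frequency, and take the most popular entries as the palette.

#include <stdlib.h>
#include <string.h>

#define COLORS_15BIT 32768   /* 5 bits each of R, G, B */

typedef struct { unsigned short color; unsigned long count; } colorfreq;

static int by_count_desc(const void *a, const void *b)
{
    const colorfreq *ca = a;
    const colorfreq *cb = b;
    if (ca->count < cb->count) return 1;
    if (ca->count > cb->count) return -1;
    return 0;
}

/* pixels: packed 15-bit pixels; n: pixel count; palette: output array with
   room for palsize entries. Returns how many palette slots were used. */
int pick_palette(const unsigned short *pixels, long n,
                 unsigned short *palette, int palsize)
{
    static colorfreq freq[COLORS_15BIT];
    long i;
    int used = 0;

    memset(freq, 0, sizeof(freq));
    for (i = 0; i < COLORS_15BIT; i++)
        freq[i].color = (unsigned short)i;
    for (i = 0; i < n; i++)
        freq[pixels[i] & 0x7FFF].count++;

    /* sort all possible colors by how often they actually appeared */
    qsort(freq, COLORS_15BIT, sizeof(freq[0]), by_count_desc);

    for (i = 0; i < COLORS_15BIT && used < palsize; i++)
        if (freq[i].count > 0)
            palette[used++] = freq[i].color;
    return used;
}

A real reducer would then remap visually similar colors onto the chosen palette entries, which is the “combining” step described above.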

To reduce the huge amount of data required for video, I created some differential image compression code, where I stored only the pixels that changed between frames, using run length compression for the empty parts. For lots of complex parts without long skip runs, I used a bitmask to define which pixels changed. Some of this code eventually became the DFF animation format used in GRASP. Here is a letter I wrote to Paul Mace Software about the DFF format.
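As a rough illustration of that idea (this is not the actual DFF layout, just a sketch of the skip/copy principle with hypothetical names), a delta encoder can walk two frames in parallel and emit (skip, copy) run pairs so that unchanged spans cost only a count and only changed pixels are stored:

/* Emits records of: skip_count, copy_count, then copy_count changed bytes.
   prev/cur are the previous and current frames, n is the frame size in
   bytes, out must be large enough for the worst case. Returns bytes written. */
long delta_encode(const unsigned char *prev, const unsigned char *cur,
                  long n, unsigned char *out)
{
    long i = 0, o = 0;

    while (i < n) {
        long skip = 0, copy = 0, start, j;

        /* count unchanged bytes (capped so the count fits in one byte) */
        while (i < n && prev[i] == cur[i] && skip < 255) { i++; skip++; }
        start = i;
        /* count changed bytes */
        while (i < n && prev[i] != cur[i] && copy < 255) { i++; copy++; }

        out[o++] = (unsigned char)skip;
        out[o++] = (unsigned char)copy;
        for (j = 0; j < copy; j++)
            out[o++] = cur[start + j];
    }
    return o;
}

The bitmask variant mentioned above would replace a copy run with one bit per pixel for regions where the changes are scattered rather than contiguous.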

The playback software also ran into issues with the slow speed of palette and video memory writes, so I couldn’t change palettes without extreme color flash. My workaround was to only use 120 colors for the video, reserving 8 colors for text and surround. Then I’d alternate using the top 128 vs the bottom 128 colors so I’d never change the palette for any visible colors. All this got used later for the National Geographic Mammals CD-ROM.

Q: In the spring of 1984, you and Doug Wolfram developed PCPaint, which would be distributed by Mouse Systems with their mice. At the time, you lived in New York and Doug in California. How did you handle geographically distributed development?

Doug ran an Apple II based BBS which I was a member of at the time. It did not support files, only messages. Unfortunately we have no kind of archive of that BBS. During development we mostly communicated by telephone.

This was before we had internet access or CompuServe. So we generally sent files back and forth via modem. I used a Hayes 1200 baud Smartmodem with terminal programs like Qmodem.

It was too painful moving files between Apple and IBM at that time, so I generally did PCPaint work and communications on the PC. Doug does not recall having a hard drive on his PC during PCPaint 1.0, so I mostly sent him final builds since compiling on floppies was grim. We had a friend do the fonts, and some of the graphics.

PCPaint 1.5 was done the same way with me working on Staten Island at my parents house, and Doug in his home on Serang Place in Costa Mesa California.

In the fall of 1985 PCPaint 2.0 was running late… to speed up final development, the agreement with Mouse Systems was that I would go work onsite at Mouse Systems if they covered costs. I ended up working at the Mouse Systems offices every day for over a month, taking long drives at night in a light blue rental Mercedes convertible. Great car! Sadly I had to give back the Mercedes because they had it reserved for James Belushi, and I had long overrun the original rental agreement. I ended up with an awful Nissan NX200 that overheated. I drove it down to Doug’s house in Southern California when PCPaint 2.0 shipped.

That development became such a grind that PCPaint 2.0 was probably the most bug free commercial software I’ve ever worked on. Such a grind that one of the testers died shortly after completion!

A year later, I bought Doug’s house on Serang Place, and he moved half an hour further south. I wasted a hunk of the PCPaint money on a new car. Was tempted by a Mercedes convertible, but interior space was too limited. I went with a white Porsche 928S4, which I foolishly crashed driving too fast in an empty damp part of Pacific Coast Highway after a few months.

Glad I wasn’t hurt.

Once I had moved into Costa Mesa, I started a GRASP support BBS running on PCBoard, and used that for sending files. That was all dialup modems. That BBS eventually ran on the Compaq 386/20 IBM paid a fortune for! I didn’t get broadband of any kind until the late 1990s when PacificBell offered IDSL, which was 128kbps for $110 a month. I stayed in that house over 24 years, including after I met my wife on Match.com. This was before smartphones or “swipe right”. It was more like old fashioned matchmaking by correspondence mail. We left shortly after our first daughter was born for rural New England, and the people who purchased the house tore it down except one tiny corner.

Most upsetting, I accidentally left behind my last Apple II computer with all the accessories. It was in the corner of the guest closet, and we somehow missed it as we frantically filled and closed boxes as the movers carried them away.

On the same evening after the movers left, we boxed up our last clothes and medical records and shipped them via the post office to ourselves. Except my wife was exhausted and misunderstood where we were shipping the box. She shipped it to our defunct California address, where it sadly sat outside for days until some random person took it.

My advice for moving out of a house after 24 years is rent a dumpster, and be brutal. We didn’t, and I still have untouched old PCs we paid a fortune to move.

Q: You used the C86 C compiler from Computer Innovations for PCPaint. Why did you choose C? How did you find and select your programming tools in the 1980s?

Choosing C was a joint decision at Classroom Consortia Media. We had a manager who had experience with Bell Labs and recommended C. I remember preferring Basic at first because our developers were all familiar with Basic, but when we realized the poor state of Basic on IBM PCs we went with C. I was already very familiar with assembly language on several different CPUs, so 8086 Assembly was easy for me, and since C is really like a higher level 16bit assembly language I picked it up quickly.

Why Computer Innovations? In 1983, it was the only reliable C compiler we found! I actually ran into a couple bugs in the compiler, but the main limits were the lack of inline assembly and of support for other CPUs. We eventually needed to compile for the 6502 to produce Apple II educational software. With Manx Aztec C, I could port over my graphics library and all the command line tools. We had so many problems doing development using Apple II computers with unreliable hard drives and quirky software that we eventually made the Apple II purely a target. So we’d do all the development and graphics on PCs, and then just do final visual testing on Apple II machines. I remember creating a special scaled mode in SuperDraw to simulate the strange aspect ratio and resolution of Apple graphics.

I don’t have exact records of which compilers were used for which versions of PCPaint. I know I switched to Microsoft C 6.0 exclusively around 1990 since that’s what was used for all the example code I made public like VGAKIT. As for other programming tools: in the early days I used WordStar for code editing under CP/M, even using a Z80 card on my Apple II, then the DOS version. I stuck with the old version, avoiding the newer WordStar versions, because I was comfortable with the keyboard layout and program design. Then I switched to Brief for editing around 1986, using a modified keyboard layout simulating the WordStar keys. For graphics editing and measuring/scaling, I mostly used my own tools, including SuperDraw.

Q: To develop PCPaint, you and Doug would need to understand how to interface with a dozen or so video modes, read data from the mouse, and implement graphical primitives such as curves, flood fills, and magnification of images. How did you learn or develop these skills and stay current in the industry? For instance, I count 17 PC compatible video hardware standards introduced between 1981 and 1991.

The first PCPaint was CGA graphics only, no unusual video modes, and the modes were available using BIOS calls to switch into them.

I don’t believe Doug had much to do with PCPaint after the first couple versions except delivering code to Mouse Systems and handling the contracts. Doug was busy with his new company “GRAFX Group”, which had been using GRASP to produce multimedia projects since 1984.

DOS based graphics, even including SVGA graphics in high resolution in the early 1990s, used NO hardware assist. All the graphics were done by writing to a frame buffer.

I believe the first hardware assist SVGA was from S3 in 1991, and was only for Windows acceleration.

To figure out the layout of the frame buffer, and how to program the chipset into different modes, I made contact with the largest chip makers and built up an ongoing relationship with companies like Tseng Labs, Hercules, Chips & Tech, ATI, Genoa, NCR, Compaq, Cirrus, Everex, Trident and others. I’d meet them at Comdex and make sure they sent me test hardware and documentation.

This was only for a few years, because they finally came together as VESA to create the standardized VESA BIOS Extensions (VBE) to detect cards, switch video modes, and provide details on the frame buffer. All this code was made public in VGAKIT, so the chip makers were generally enthusiastic about getting my support.

For the other video modes in PCPaint, I just created sets of icons/graphics/borders for each major resolution, like 640x480, 800x600, and so on. For the mouse, I used the standard Mouse Systems and Microsoft DOS mouse drivers. For tablets I had to talk to the tablets directly via a serial port because there was no standard at that time for tablet drivers.

As for all the graphics primitives, I wrote them myself, pretty much making it up as I went along, and sometimes my work was not ideal. For example, my early circle/ellipse code was kind of crude, with rough edges. For animation in video modes where pixels were not byte aligned, like 2 color, 4 color, and 16 color, I would create routines to pre-shift the graphics into each byte alignment. That way I only had to mask the edges, and then copy bytes without doing all the shifting in real time (very slow).
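A simplified sketch of the pre-shifting idea for a 1-bit-per-pixel image, where 8 pixels share a byte (this is not the original PCPaint routine; the function name and layout are made up): build eight copies of the image, one per bit offset, so that drawing at any x position becomes plain byte copies plus a mask at the left and right edges.

#include <stdlib.h>

/* src: w_bytes bytes per row, h rows, leftmost pixel in the high bit.
   Returns 8 pre-shifted copies stored back to back, each row widened to
   (w_bytes + 1) bytes to hold the bits that spill past the right edge. */
unsigned char *preshift_1bpp(const unsigned char *src, int w_bytes, int h)
{
    int shifted_w = w_bytes + 1;
    unsigned char *out = calloc((size_t)8 * shifted_w * h, 1);
    if (!out)
        return NULL;

    for (int shift = 0; shift < 8; shift++) {
        unsigned char *copy = out + (size_t)shift * shifted_w * h;
        for (int y = 0; y < h; y++) {
            const unsigned char *srow = src + (size_t)y * w_bytes;
            unsigned char *drow = copy + (size_t)y * shifted_w;
            unsigned int prev = 0;
            for (int x = 0; x < w_bytes; x++) {
                unsigned int cur = srow[x];
                /* top 'shift' bits come from the previous source byte,
                   the rest from this byte shifted right by 'shift' */
                drow[x] = (unsigned char)((prev << (8 - shift)) | (cur >> shift));
                prev = cur;
            }
            drow[w_bytes] = (unsigned char)(prev << (8 - shift));
        }
    }
    return out;
}

At draw time the blitter picks the copy whose shift equals (x & 7), masks the first and last byte of each row against the destination, and copies everything in between directly.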

I remember struggling with the performance of my area fill. I made up several algorithms until I had one fast enough that I was satisfied. My earliest was a simple recursion drawing points; it was slow and could hang in complex areas since it didn’t know which areas had been filled yet. I tried a list of scan lines, but that was slow on complex areas, so my final code had to use a bitmap in memory of which areas had been filled, to avoid bugs with complex patterns that included the same colors already in an area. The final code ended up with a dialog box to pick different kinds of fill, including gradients.
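A compact sketch of that final approach (not the PCPaint source; the fixed 320x200 buffer and the names are only for illustration): a scan-line seed fill driven by an explicit stack, with a separate one-bit-per-pixel map of what has already been filled, so fill patterns that reuse the area’s original colors can never cause a pixel to be visited twice.

#include <stdlib.h>
#include <string.h>

#define W 320
#define H 200

static unsigned char screen[H][W];       /* the pixel buffer              */
static unsigned char filled[H][W / 8];   /* 1 bit per pixel: already done */

static int is_filled(int x, int y)   { return filled[y][x >> 3] & (1 << (x & 7)); }
static void set_filled(int x, int y) { filled[y][x >> 3] |= (unsigned char)(1 << (x & 7)); }

void seed_fill(int sx, int sy, unsigned char color)
{
    typedef struct { int x, y; } seed;
    unsigned char target = screen[sy][sx];
    seed *stack = malloc(sizeof(seed) * (2L * W * H + 1));
    long top = 0;

    if (!stack)
        return;
    memset(filled, 0, sizeof(filled));
    stack[top++] = (seed){ sx, sy };

    while (top > 0) {
        seed s = stack[--top];
        int x = s.x, y = s.y;
        if (is_filled(x, y) || screen[y][x] != target)
            continue;

        /* widen to the whole horizontal run of target-colored pixels */
        int left = x, right = x;
        while (left > 0 && !is_filled(left - 1, y) && screen[y][left - 1] == target)
            left--;
        while (right < W - 1 && !is_filled(right + 1, y) && screen[y][right + 1] == target)
            right++;

        for (int i = left; i <= right; i++) {
            set_filled(i, y);
            screen[y][i] = color;   /* a pattern/gradient lookup would go here */
            if (y > 0)     stack[top++] = (seed){ i, y - 1 };
            if (y < H - 1) stack[top++] = (seed){ i, y + 1 };
        }
    }
    free(stack);
}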

For scaling/zoom, it was a simple pixel duplication, so I just wrote some special code to handle it quickly.

The text editing for proportionally spaced text was surprisingly complicated in later versions of PCPaint. And we always had problems with italic fonts and proportional spacing, since you couldn’t use the scanned pixel width. I eventually had to change the font format to include width tables to fix that.
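As a tiny illustration of that fix (a hypothetical layout, not the actual PCPaint font format), the font stores an explicit advance width per character rather than relying on the inked pixel width, so a slanted italic glyph can overhang its neighbor while still advancing the correct amount:

typedef struct {
    unsigned char height;         /* glyph cell height in pixels       */
    unsigned char cell_width;     /* width of the stored bitmap cell   */
    unsigned char advance[256];   /* per-character advance width table */
    /* glyph bitmap data would follow in the real file */
} prop_font;

/* Width of a string is the sum of per-character advances, not the sum
   of scanned glyph widths. */
int string_width(const prop_font *f, const char *s)
{
    int w = 0;
    while (*s)
        w += f->advance[(unsigned char)*s++];
    return w;
}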

Keep in mind I failed trig in high school when I hated memorizing formulas and slacked off. Second time I took the class, a different teacher emphasized how everything was derived, and I easily got an A.

As long as I could figure out the roots of something so that I understood how it was arrived at, I could take a crack at solving anything on my own.

Q: Can you provide an example of an algorithm you developed?

The algorithm I needed was for an IMGWARP utility that eventually became a command in GRASP/GLPRO. I don’t have the original source for either (still looking), but I do remember the story: it basically took an image and mapped it to any four sided polygon.

Picture of peppers stretched and skewed

Screenshot of Peppers Image Warped (Orig. 4.2.07 in USC-SIPI)

Ideally, it would be all integer, no complex math since this would sometimes need to be done in real time in a running app on a slow computer running DOS. I wrote a few test algorithms such as simple ratios, but they looked strange and badly distorted, and fell apart in complex cases. I consulted with my older brother Tom who is a math professor, and he fiddled around with it, but the results didn’t look right, and the math was too complex. I tried some math solving software by writing it all out long form, and asking it to simplify it. Didn’t get anywhere.

Finally I asked a smart close friend of mine to do it as a project where I’d pay him. I don’t remember the amount of money, and sadly he passed a few years ago. He puzzled over it for a few weeks until he was very excited to show me his very simple answer. At one of our regular Indian restaurant meals he drew it out on a sheet of paper. He extended two of the sides until they met, so it became a simple triangle, then did a simple ratio based on that area. Parallel sides were special-cased. When I described it to my brother he was a bit shocked at how simple it really was.

quadrilateral x1,y1 to x4,y4, containing point xx,yy and triangle extending from x1,y1 x2,y2 to unnamed third point enclosing xx,yy

Recreation of triangle diagram

Editor’s note: although the original code is lost, Bridges believes the code below is very similar to the algorithm developed in the mid-80s.

#include <math.h>  
#include <stdio.h>
#include <stdlib.h>

typedef struct { int x; int y; } point;

int imgWidth = 1000;
int imgHeight = 1000;

   // Define quad corners (screen space)
point quad[4] = {
    { 200, 100 },  // Top-left
    { 600, 150 },  // Top-right
    { 550, 500 },  // Bottom-right
    { 180, 450 }   // Bottom-left
};

int TriangleArea(point a, point b, point c) {
    return abs((a.x*(b.y-c.y) + b.x*(c.y-a.y) + c.x*(a.y-b.y)));
}

int PointInTriangle(point p, point a, point b, point c) {
    int areaTotal = TriangleArea(a, b, c);
    int area1 = TriangleArea(p, b, c);
    int area2 = TriangleArea(a, p, c);
    int area3 = TriangleArea(a, b, p);
    return abs(areaTotal - (area1 + area2 + area3)) == 0;
}

void SinglePixel(int x, int y, int *rx, int *ry)  {
    point p = { x, y };
    // First triangle: quad[0], quad[1], quad[2]
    if (PointInTriangle(p, quad[0], quad[1], quad[2])) {

        int areaABC = TriangleArea(quad[0], quad[1], quad[2]);
        int areaPBC = TriangleArea(p, quad[1], quad[2]);
        int areaPCA = TriangleArea(p, quad[2], quad[0]);
        int areaPAB = areaABC - (areaPBC + areaPCA);

        *rx = ((areaPCA + areaPAB) * (imgWidth - 1)) / areaABC;
        *ry = (areaPAB * (imgHeight - 1)) / areaABC;

    }
    // Second triangle: quad[0], quad[2], quad[3]
    else {
        int areaABC = TriangleArea(quad[0], quad[2], quad[3]);
        int areaPBC = TriangleArea(p, quad[2], quad[3]);
        int areaPCA = TriangleArea(p, quad[3], quad[0]);
        int areaPAB = areaABC - (areaPBC + areaPCA);

        *rx = (areaPCA * (imgWidth - 1)) / areaABC;
        *ry = ((areaPCA + areaPAB) * (imgHeight - 1)) / areaABC;
    }
}

int main() {

    // Point inside quad
    int xx = 200;
    int yy = 250;

    int rx, ry;

    SinglePixel(xx, yy, &rx, &ry);
    printf("Mapped point GOOD: (%d, %d)\n", rx, ry);
    return 0;
}

Sadly I don’t use any of that code anymore. Instead I built a much faster set of code in 2003 that recursively divided each successive area into quarters until one pixel in size. I modified that in late 2004 to anti-alias by oversampling, which looked much nicer. It looks like it was based on an algorithm from someone else that I copied, since it has comments that are not my style (formatted, and they don’t match the actual code), but I can’t find the original using a web search on any of these comments.

I did find my old polygon fill with line matching code. That’s the original code from PCPaint, since it had to handle complex many-segment polygons with overlap/crossovers. I was able to use the stdlib qsort instead of my ancient “myqsort”.

#include "raylib.h"
#include <math.h>
#include <stdio.h>
#include <stdlib.h>
#include "resource_dir.h"

typedef struct { int x; int y; } point;

const int screenWidth = 800;
const int screenHeight = 600;

static Image img;
static Color *pixels;
static int imgWidth;
static int imgHeight;

	// Define quad corners (screen space)
point quad[4] = {
	{ 200, 100 },  // Top-left
	{ 600, 150 },  // Top-right
	{ 550, 500 },  // Bottom-right
	{ 180, 450 }   // Bottom-left
};


// Helper: Calculate area of triangle
int TriangleArea(point a, point b, point c) {
	return abs((a.x*(b.y-c.y) + b.x*(c.y-a.y) + c.x*(a.y-b.y)));
}

// Helper: Check if a point is inside a triangle
bool PointInTriangle(point p, point a, point b, point c) {
	int areaTotal = TriangleArea(a, b, c);
	int area1 = TriangleArea(p, b, c);
	int area2 = TriangleArea(a, p, c);
	int area3 = TriangleArea(a, b, p);
	return abs(areaTotal - (area1 + area2 + area3)) == 0;
}

void SinglePixel(int x, int y)  {
	int tx;
	int ty;

	point p = { x, y };
	// First triangle: quad[0], quad[1], quad[2]
	if (PointInTriangle(p, quad[0], quad[1], quad[2])) {

		int areaABC = TriangleArea(quad[0], quad[1], quad[2]);
		int areaPBC = TriangleArea(p, quad[1], quad[2]);
		int areaPCA = TriangleArea(p, quad[2], quad[0]);
		int areaPAB = areaABC - (areaPBC + areaPCA);

		tx = ((areaPCA + areaPAB) * (imgWidth - 1)) / areaABC;
		ty = (areaPAB * (imgHeight - 1)) / areaABC;

	}
	// Second triangle: quad[0], quad[2], quad[3]
	else {
		int areaABC = TriangleArea(quad[0], quad[2], quad[3]);
		int areaPBC = TriangleArea(p, quad[2], quad[3]);
		int areaPCA = TriangleArea(p, quad[3], quad[0]);
		int areaPAB = areaABC - (areaPBC + areaPCA);

		tx = (areaPCA * (imgWidth - 1)) / areaABC;
		ty = ((areaPCA + areaPAB) * (imgHeight - 1)) / areaABC;
	}
	
	if(tx>=0 && tx<imgWidth && ty>=0 && ty<imgHeight) {
		Color color = pixels[ty * imgWidth + tx];
		DrawPixel(x, y, color);
	}
}

void fillrect(int x1, int y1, int x2, int y2) {
	int x;
	int y;

	if (x1>x2) {
			int xtmp = x1;
			x1 = x2;
			x2 = xtmp;
	}

	if (y1>y2) {
			int ytmp = y1;
			y1 = y2;
			y2 = ytmp;
	}

	for (y = y1; y<=y2; ++y) {
		for (x = x1; x<=x2; ++x) {
			SinglePixel(x, y);
		}
	}
}

void mtline(int *x1, int y1, int *x2, int y2, int matchy)
{
	int deltax, deltay;
	int dirx, diry;
	int acc;
	int cnt;
	int xx, yy;

	acc = 0;

	xx = *x1;
	yy = y1;
	deltax = *x2 - xx;
	if (deltax == 0)
		return;

	dirx = 1;
	if (deltax < 0)
	{
		deltax = -deltax;
		dirx = -1;
	}

	deltay = y2 - yy;
	diry = 1;
	if (deltay < 0)
	{
		deltay = -deltay;
		diry = -1;
	}

	if (deltay < deltax)
	{
		cnt = deltax;
		while (yy != matchy)
		{
			while (--cnt >= 0)
			{
				xx += dirx;
				acc += deltay;
				if (deltax <= acc)
					break;
			}
			acc -= deltax;
			yy += diry;
		}

		*x1 = xx;
		while (--cnt >= 0)
		{
			xx += dirx;
			acc += deltay;
			if (deltax <= acc)
				break;
		}
		xx -= dirx;
		*x2 = xx;
		return;
	}


	while (yy != matchy)
	{
		yy += diry;
		acc += deltax;
		if (acc >= deltay)
		{
			acc -= deltay;
			xx += dirx;
		}
	}
	*x1 = xx;
	*x2 = xx;
	return;
}

#define isgn(i) ((i) < 0 ? -1 : 1)


static int cmpit(const void *a, const void *b)
{
    const int *ia = a;
    const int *ib = b;
    return (*ia < *ib ? -1 : *ia > *ib ? 1 : 0);
}

/*************************************************
fillpoly(xya,numxy,func)

input   : array of coordinates, number of elements in array, fillrect function
output  : nothing
utility : fills polygon defined by points in array
      xya. number of points in array is numxy.
      calls func to fill area.
**************************************************/
void fillpoly(point *xya, unsigned int numxy, void (*func) (int x1, int y1, int x2, int y2))
{
    int i;
    unsigned int cnt;
    int hr, hf;
    int nx1, nx2;
    int xa[256];
    int *xpnt;
    int df2, df1, df0;
    int x1, y1, x2, y2, zy;
    int tpy, bty;
    unsigned int ni;
    int box;
    int eqx, eqy;

    tpy = 0;
    bty = 32767*32767;

    box = (numxy == 4);

/* find minimum and maximum y values */
    for (i = 0; i < (int) numxy; ++i)
    {

        zy = xya[i].y;
        if (zy < bty)
            bty = zy;
        if (zy > tpy)
            tpy = zy;

        if (box)
        {
            ni = i + 1;
            if (ni >= numxy)
                ni = 0;

            eqx = (xya[i].x == xya[ni].x);
            eqy = (xya[i].y == xya[ni].y);
            if (!((eqx && !eqy) || (!eqx && eqy)))
                box = false;
        }
    }


    if (tpy == bty)     /* all points on one horizontal */
    {
        for (i = 1; i < (int) numxy; ++i)
            (*func) (xya[i - 1].x, tpy, xya[i].x, tpy);
        return;
    }

    if (box)
    {
        (*func) (xya[0].x, xya[0].y, xya[2].x, xya[2].y);
        return;
    }

    for (zy = bty; zy <= tpy; zy++)
    {
        xpnt = xa;
        x1 = xya[numxy - 1].x;
        y1 = xya[numxy - 1].y;
        df1 = isgn(zy - y1);
        df0 = isgn(zy - xya[0].y);
        if (!df0 && !df1)   /* first point in middle of horizontal */
        {
            i = numxy - 1;
            while (i >= 0)
            {
                --i;
                hf = isgn(zy - xya[i].y);
                if (hf)
                    break;
            }
            hr = 1;
            *xpnt++ = xya[++i].x;
        }
        else
            hr = 0;

        for (i = 0; i < (int) numxy; i++)
        {
            x2 = x1;
            y2 = y1;
            df2 = df1;
            x1 = xya[i].x;
            y1 = xya[i].y;
            df1 = df0;

            ni = i + 1;
            if (ni >= numxy)
                ni = 0;

            df0 = isgn(zy - xya[ni].y);

            if (!df1)   /* this vertex on zy */
            {
                if (df2)    /* unique point or beginning of horizontal */
                {
                    *xpnt++ = x1;
                    if (!df0)
                    {
                        hf = df2;
                        hr = 1;
                    }
                    else if (df2 == df0)
                        *xpnt++ = x1;
                }
            }
            else
            {
                if (hr)
                {
                    hr = 0;
                    if (df1 == hf)
                        *xpnt++ = x2;
                }
                else if ((df1 != df2) && df2)
                {
                    nx1 = x1;
                    nx2 = x2;
                    mtline(&nx2, y2, &nx1, y1, zy);
                    if (nx1 != nx2)
                        (*func) (nx1, zy, nx2, zy);
                    *xpnt++ = nx1;
                }
            }
        }
        cnt = xpnt - xa;
        if (cnt & 1)
        {
            *xpnt++ = x1;
            cnt++;
        }
        qsort(xa, cnt, sizeof(xa[0]), cmpit);
        xpnt = xa;
        for (i = 0; i < (int) cnt; i += 2)
        {
            (*func) (xa[i], zy, xa[i + 1], zy);
        }
    }
}

int main() {

	InitWindow(screenWidth, screenHeight, "Manual Quad Texture Mapping");

	SearchAndSetResourceDir("resources");

	img = LoadImage("./4.2.07.png");
	pixels = LoadImageColors(img);
	imgWidth = img.width;
	imgHeight = img.height;

	SetTargetFPS(6000);
	
	int frames = 120;

	while (!WindowShouldClose() && --frames>0) {
		BeginDrawing();
		ClearBackground(RAYWHITE);

		fillpoly(quad, 4, fillrect);

		// Optional: Draw the quad outline
		for (int i = 0; i < 4; i++) {
			DrawLine(quad[i].x, quad[i].y, quad[(i + 1) % 4].x, quad[(i + 1) % 4].y, BLACK);
		}

		EndDrawing();
		quad[0].x -= 2;
		quad[2].x += 2;
	}

	// Cleanup
	UnloadImageColors(pixels);
	UnloadImage(img);
	CloseWindow();
	return 0;
}

[…] I thought I’d mention why the code looks so complex for what you’d think wasn’t a particularly complex problem. I had numerous issues with the outline matching the fill with no pixel gaps. I remember Mouse Systems being insistent that there be no gaps or holes. In the end, the only foolproof solution was to duplicate the draw line code. I made a version of my line draw that just gave the starting and ending X coordinates that overlapped at a specific Y location, so it would match a drawn line exactly with no gaps. That’s the mtline function (void mtline(int *x1, int y1, int *x2, int y2, int matchy)). That’s also why my line draw always starts drawing pixels from x1,y1 instead of swapping endpoints to simplify the code. That’s also the reason why this warpimage demo won’t have perfectly matching edges: the quad drawn afterward uses the raylib DrawLine instead of my own draw line code, so it’s not absolutely identical.

The other reason the polygon fill is so complex is that it handles extreme cases like this with multiple crossings and confusing inside vs outside.

Highly convex and overlapping polygon filled with pattern

Complex polygon fill example in PCPaint 3.1 (source John Bridges)

Q: How was the experience working with early SVGA cards?

There were HUGE performance differences between different brands!

I wrote a benchmark, VIDSPEED, in 1987, which is still found online. For instance, the IBM Model 30 8086 PC used for the early video project managed only around 680 bytes per millisecond in all video modes. That means it took around 1/10th of a second to clear the entire 320x200 256 color screen, doing nothing else but writing data. That’s why I had to do the video at 1/4 screen for the IBM demo.

Sadly, the IBM Model 70 PS/2 80386 PC was still only around 611 bytes per millisecond in 320x200x256 color mode. A pretty shockingly poor showing, while a Tseng ET4000 board on a 386 PC ran at over 5000 bytes per millisecond, over 7 times faster.

This made a HUGE difference in animation and game performance. IBM video writes were generally slow for years in my benchmarks. This person has a very nice table of results including some cards made after the leap from ISA to EISA, then PCI, then AGP, and then the now popular PCIe. Their top speed is a Tseng ET6100 PCI card that tests at 33436 bytes per millisecond. Since VIDSPEED is a DOS program, it only tests 16-bit writes. I’m sure it would test faster on PCI cards using 32-bit or 64-bit writes. https://thandor.net/benchmark/73

This person has a 386 running with recent benchmarks. I’m surprised they found working hardware since many of my SVGA sample cards eventually failed. https://www.os2museum.com/wp/more-isa-vga-benchmarks/

I forgot the VESA Local Bus, an extension of ISA from 1992 that came before PCI. It was used on fast 486 PCs before the Pentium was affordable, so it was fairly common for a few years even after PCI started to show up.

https://en.wikipedia.org/wiki/VESA_Local_Bus

Although I have a few VLB cards, I don’t have any working 486 machines except an IBM 701C laptop. In fact all my machines from the late 1990s through the early 2000s were wiped out by the Capacitor Plague, very upsetting at the time.

Q: PCPaint’s native image format was PIC or Pictor PC Paint. The Encyclopedia of Graphics File Formats portrays version 1.0 of PIC as a small wrapper around BSAVE format, or a rather literal translation of the framebuffer to an array of bytes. However, later versions add additional metadata and run-length encoding compression. How did you evolve the file format? What drove the changes and your design choices? Was there any formal interchange of ideas or lessons learned between the creators of file formats at the time (e.g. PCX, GIF)?

PCPaint saved in BSAVE because that was literally the ONLY standard for images on PCs at that time. Yes it sucked, but particularly in BASIC that was the “standard”.

One of the reasons BSAVE became useless was EGA graphics, where the screen was no longer a simple buffer but rather bank-switched planes. You couldn’t BSAVE an EGA image!

GIF was created in-house at CompuServe in 1987, and didn’t involve outside developers until 1988. TGA was only for high color images, had no compression, and was niche at that time; it didn’t really come into use until 1986. TIFF came out in 1986, and frankly was far too versatile, trying to be everything to everyone. I never supported more than a subset. I don’t know if anyone ever supported all of TIFF. PCX was initially tied to PC Paintbrush, a competitor licensed to Microsoft for their mice. PCPaint did eventually support PCX files in PCPaint 3.1. I had no contact with the ZSoft developers.

There was better compression available from ARC and then PKARC in 1985/1986, but it wasn’t easily integrated into other programs. Internally, PCPaint 1.0 already used run-length compressed images; all the artwork is in the newer format with size/offset information. But Mouse Systems didn’t want to save them, for good reason: what would you use them with? You could only load them back into PCPaint. Offering other formats to save would have required another dialog box and more complexity.

The OVR file included with PCPaint 1.0 is an image library with a 2 byte count of the header size, followed by 16 bytes for each image: 4 bytes for the image file offset, and then 12 bytes for the null terminated filename without extension. So making an RLE compressed PIC format was by far the easiest option at that time: the code was already in there; it was just a matter of changing the save to call a different function.
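For illustration, a rough reader for that directory layout might look like the sketch below. It is based only on the description above, assuming little-endian fields as on the PC; in particular, treating the 2-byte value as the byte size of the entry table (rather than an entry count) is an assumption, and all names are made up.

#include <stdio.h>
#include <string.h>

typedef struct {
    unsigned long offset;    /* where the image data starts in the OVR file */
    char          name[13];  /* up to 12 characters plus a terminating '\0' */
} ovr_entry;

static unsigned int read_u16(FILE *f)
{
    int lo = fgetc(f), hi = fgetc(f);
    return (unsigned int)(lo & 0xFF) | ((unsigned int)(hi & 0xFF) << 8);
}

static unsigned long read_u32(FILE *f)
{
    unsigned long lo = read_u16(f);
    return lo | ((unsigned long)read_u16(f) << 16);
}

/* Reads up to max directory entries into out, returns how many were read. */
int read_ovr_directory(FILE *f, ovr_entry *out, int max)
{
    unsigned int header_size = read_u16(f);    /* 2-byte header size        */
    int count = (int)(header_size / 16);       /* assume 16 bytes per entry */
    int n = 0;

    for (int i = 0; i < count && n < max; i++) {
        char raw[12];
        out[n].offset = read_u32(f);           /* 4-byte file offset        */
        if (fread(raw, 1, 12, f) != 12)        /* 12-byte name field        */
            break;
        memcpy(out[n].name, raw, 12);
        out[n].name[12] = '\0';
        n++;
    }
    return n;
}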

As for evolving the PIC format, I had to add more palette types, and then highcolor, so the header changed slightly. I never added more compression types since GIF/PNG/JPEG all filled those roles better. It’s funny that GIF is mostly known for animation now, since I didn’t support any of the animation features until decades later! PICEM, the free image viewer I gave out, didn’t support GIF animation at all. I mostly just used GIF as a better compressing 256 color format.

Q: Starting in 1984 and continuing over several decades, you developed a series of “presentation software” programs: GRASP, GLPro, and AfterGrasp. These tools have been used to create games (example 1 and example 2), demos, screensavers, and commercial animations. What has kept you interested in this domain?

The actual use of these tools changed a great deal over time. From multimedia hobby, to presentations, kiosks, browser helper, and as a server side scripting language.

GRASP was initially intended for hobbyists as a very simple language to play around with graphics. It was used for multimedia projects and, to a limited degree, for early presentations, like an EXE distributed as an animated advertisement for a business. It was also used for information kiosks, and this is where the early support for touchscreens and sound in GRASP was used. GLPro was the same thing as GRASP except it also supported Windows, and was intended to support Linux and Mac.

We had hired a Mac specialist to do the MacOS version, but due to conflicts over what the goal was, we had to fire him, which led to some very bad feelings. He wanted to start fresh with a completely new object oriented syntax and then backport that to Windows. My only goal was to get the existing code running on Mac. He was working in the Gmedia offices in England, and I was in California, so that made it all the more difficult to resolve these conflicts. I think he had been misled by those who interviewed and hired him, and was very upset he had left a reliable position to work at Gmedia on a project he thought he’d control.

It finally came down to: will you do what I need done, or do we have to say goodbye? He left. I ended up doing the first MacOS version myself (I had to learn MacOS programming as part of that process), and I had something working in a few months that was in early testing and expansion with a new Mac developer when the company closed.

Gmedia GLPro was only around for a few years, but some of the largest leaps in a short time were made then: a rewriting of the command structure, experiments with a completely different expression syntax, and adapting to other platforms. Gmedia had to close mostly because of internal legal fighting and huge legal bills in spring 2001. The assets were sold to a customer of GLPro in Alabama, who eventually went their own way when we couldn’t come to an agreement.

Several commercial customers of GLPro felt abandoned, and I was in talks with some of them to try and help. I wrote AfterGRASP for Digi-products, based in England, one of those users of GLPro. I felt partly responsible for this upheaval, and was excited to do my first complete rewrite to help some of the GLPro community. Because of legal issues with GLPro, AfterGRASP had to be started from scratch, and this allowed me to completely redesign how it worked internally while maintaining enough backwards compatibility to allow Digi-products to use it for their existing projects.

Because I started from scratch and was doing a new internal design, it took years until most of GLPro was replicated, and some features were never supported. It had a compiler, AGCOMP, which produced a Forth-like language that was tokenized into AGC binary files. The actual interpreter worked quite differently, supporting limited threads, event driven features, and far faster execution. I started AfterGRASP in March of 2002, Windows only, and I had early drawing/images/text after a few months.

AfterGRASP was free to use, but we couldn’t really handle providing real support. The focus was almost entirely on Digi-products’ needs, and we never sold any commercial licenses. It may have started based on the ideas in GLPro, but it became an internal platform for all kinds of different projects inside Digi-products. Eventually AfterGRASP was adapted to run as a Browser Helper Object for Internet Explorer so that GL files could be run from a webpage. I used the BHO version of AfterGRASP to create our email and advertising PDF editing system for direct email and distribution. A lot of the server side image manipulation and Postscript/PDF manipulation was done in AfterGRASP.

As Internet Explorer started dying out and support for BHOs waned, I rewrote that entire system in JavaScript. I even adapted AGCOMP to allow a JavaScript style syntax so I could mix AfterGRASP and JavaScript in the same project. The JavaScript based email system grew, and I eventually adapted it to a new style of modular email that could be used for mobile email. The last builds of AfterGRASP were from just before COVID19 started, late 2019, and the email editing system slowly declined as direct email became less of a viable business.

Here are the notes I wrote back in 2002 on where AfterGRASP came from:

Where did AfterGRASP come from?

Aside from John Bridges (me!), Digi-products Ltd., and the small group of Alpha testers who have been in the loop since the end of July 2002, the list of people who even know there is an AfterGRASP project is tiny.

The list of those who knew about AfterGRASP did not include anyone involved in Gmedia or IMS or Paul Mace Software. In particular, this project has nothing to do with any of my past associates or Interactive Homes.

The AfterGRASP project started in earnest the last couple days of Feb 2002. That date does not coincide with any “event”; it just happened to be when everyone was comfortable enough with the details to say “OK, let’s start”. However, it does happen to be the one year anniversary of the closing of Gmedia’s doors, which occurred the last few days in Feb 2001.

Rick Franklin of Interactive Homes knows this project exists. He was called a few days before any testers were invited to sign an NDA in July 2002. Rick and Interactive Homes know who is writing it, and who is funding it, and why. There is no ill will between Interactive Homes and Digi-products.

Excluding public code (like ZLIB), AfterGRASP contains no source code from GLPRO. AfterGRASP is a completely new project started from scratch. GLPRO was like an organic growth with new features being attached over the years, many now completely obscure and replaced by newer features. This means AG will likely NEVER match the entire GLPRO feature set. For example, the complex “DATA” commands will likely never be replicated since they are so freeform, making compilation almost impossible.

Q: A differentiator between the GRASP line of tools and other presentation software is the use of an imperative command script to control animations rather than manipulating them within a user interface. Why did you choose this route? How did you approach the design and evolution of the language?

Paul Mace Software was the publisher for GRASP, and pushed hard for a GUI version of GRASP. They wanted some kind of UI like I had done for PCPaint, except to create simple projects. They were right. There was still time to do it, when competitors like Macromedia Director and Adobe Flash were not around or established yet. Due to my own pigheadedness, laziness, and conflict over the direction of marketing/sales, it languished in the early 1990s. I eventually came out with MMGRASP, which was really a bundling of a bunch of tools, like PCPaint relabeled as PICTOR, and other image utilities, but no GUI at all. Part of the reinforcement was that the GRASP users I had regular contact with were often doing complex projects that couldn’t be done in a simple GUI. They wanted advanced features like addons, and control of peripherals. This reinforced my own lack of will to “just do something”. I believe if I had started something it would have grown, and the entire thing would perhaps have gone in a different direction.

I believe part of the problem was my lack of a conception of how this would work beyond placing some pictures and some text and moving them around. I didn’t have a clear vision for how you would integrate this GUI layout with scripting in a way that allowed you to edit the resulting script and yet still be able to use the GUI to adjust layout after you had made changes to the script.

Jason Gibbs, who later founded Gmedia, was 100% responsible for the syntax rewrite from GRASP to GLPro. He was really irritated by the seemingly random command naming, and wanted to make it all consistent. He was a heavy user of GRASP when he was part of IMS Communications Ltd in Twyford, UK, and sold some addons for GRASP via IMS for years before wanting to release a successor to GRASP called GLPro. Jason’s enthusiasm for GRASP and doing GLPro is what got the whole ball rolling on GLPro. This got GLPro on Windows, and a lot of advances, but Jason had little interest in any kind of simple GUI tool.

Unfortunately, Jason’s conflicts with the management brought in by the investors, with the investors themselves, and with lawyers are what ultimately led to Gmedia collapsing. Jason hoped to buy up the assets in bankruptcy, but was outbid by customers who were worried about whether Jason would support existing GLPro uses. A legitimate concern. As far as I know Jason left to move to Belize, and sadly I’ve not had any contact with him since nor been able to locate him. IMS Communications, run by his parents, shut down a few years ago.

With AfterGRASP there was little discussion of making GUI tools to generate scripts. I did produce GUI products for layout/editing documents, webpages, email and other content, but never anything to produce new AfterGRASP script/projects. In fact for years we used the GLPro script editor to edit AfterGRASP scripts, and I designed a frontend so it could be used to compile/execute AfterGRASP projects. The one area I did consider a new GUI was for mathematical image transformations. I designed a whole system of performing transformations on images in realtime to do effects like a fisheye lens, or other distortion effects. It was like a DFF image difference sort of engine where each pixel value was defined by a list of expressions done to nearby pixels in the source image. Would be perfect for the sort of parallel processing you get in modern GPUs! The problem was we had no tools to generate these computation masks. I did some command line tools, but it was conceptually too difficult for our graphic artists or script authors at Digi-products to understand what was going on. I did some design work on a GUI to edit these math mask projects, but it got pushed to the wayside when we didn’t have any paying projects that needed any effects it could produce.

Q: In 1991, Dr. Dobb’s Journal published an article you wrote on differential image compression which can be used to compress animations or video. I understand some of the techniques you explore in the article came from a research project you did for IBM and CCM. What were the outcomes of that project?

The final outcome was a demo video of children learning using one of the new IBM Model 30 computers, which were a little faster than the original IBM PC. They used the 8 MHz 8086 instead of the PC’s 5 MHz 8088.

The Model 30 did not have sampled sound; that took an add-on card. In early 1987, when I produced the demo, there were no popular PCM sound cards for the PC yet. The AdLib card, which did FM synthesis for effects and music, was first released in 1987, and the popular Sound Blaster didn’t ship until 1990.

Simple pixel difference animation’s real benefit was very low CPU overhead and encoding of palette-limited images. MPEG wasn’t around at all until about 6 years later, and couldn’t be played back on such a slow CPU.

The main use for that code was the DFF animation format which was used for GRASP/MMGRASP.

I was told the National Geographic Mammals CD-ROM from 1990 was produced for/by IBM using my DFF video code. It was produced with LinkWay. I don’t know how the video was integrated, whether LinkWay had support for the video playback directly or ran an outside EXE for video playback.

Q: Can you describe your general approach to software development and testing? For AfterGrasp, you maintained an update.txt file that seems to be a daily log of your activities and design changes. How did you intend yourself and others to use that file?

I’ve developed several styles of development depending on the team size, and who I was working with.

When I first worked on graphics libraries/tools at CCM, I would keep a master copy of all tools and libraries, and update everyone by floppy to their hard drive at the end of the day, when I wouldn’t interrupt anyone’s work. After a mishap where I broke something and caused a couple application developers to lose several hours, I started a staged update process. I had a couple developers work in my office, and I would hand them the updates first. They would test it out on their own projects first. Then I handed it to the more advanced developers to test. Finally, after a day, I would pass it out to everyone. There were a few hard feelings about some developers not getting the “New Fixes”, but this staged update kept everyone working.

For PCPaint 1.0 I was almost entirely working on my own, and hearing distant feedback. Because PCPaint 1.0 was the simplest version of PCPaint, we got away with shipping it with quite a few small bugs. PCPaint 1.5 got delayed a bit because of the distant testing chain. Mouse Systems insisted in the contract that for PCPaint 2.0, if it wasn’t ready to ship by late summer 1985, I would fly out and live at the Mouse Systems campus until it was “bug free”. This was the only time I’ve ever worked with a whole staff of people testing and managing the development process. We had regular meetings on the most critical problems, and testers constantly pounding on it finding new problems, and testing for regression bugs. This ended up dragging on for over a month, during which I lived out of a motel with no kitchen and ate out for all meals.

For GRASP I started a beta testing BBS where regular testers got access to the latest code and could request new features/fixes. We also communicated through Compuserve in the PICS forum where there was a GRASP area, and also where GIF developers not employed by Compuserve would discuss graphics coding. These places were where I got in the habit of documenting every significant change as a running commentary. I’d discuss fixes, new features, and provide examples in a running log of messages.

For GLPRO I ended up with five primary means of communication:

  1. Phone calls for critical business/personal issues with Gmedia management, Investors and Developers. I had gotten in the habit of long walks starting around 3am when it was cool and quiet. Some of the longest calls were made on those walks, such as when I had a cathartic conversation with the investors as Gmedia was falling apart.
  2. I created an email list for beta testers/users. I couldn’t discuss all details like I had with GRASP beta users. So for instance specifics of the MacOS, Linux, PlayStation and other ports were not made public. An archive of those messages from 1996 to 2003 is found here: https://www.aftergrasp.com/glprolist/.
  3. For GLPRO I got more into this habit of a single text file that kept growing since it was so useful as a record of my thoughts as I added features.
  4. Source control. I set up a Perforce server over an IDSL link so all the developers working on GLPRO could access the most recent versions. Sadly, we had only been using this for a few months before Gmedia fell apart, so we didn’t have any kind of reliable backup system working yet.
  5. Constant email communication, largely between Jason Gibbs and myself as well as the other developers. At that time I was using a mixture of OS/2 and Windows 2000, and PMMail as my email client.

For AfterGRASP things shrank down: I was developing mostly by myself, with just some graphic artists and application developers who used AfterGRASP but didn’t do any programming on the C code. I also switched over to C++ instead of C, still only using a limited subset of C++, but taking advantage of huge improvements in scope/types. By then I also stopped coding any ASM files; any limited assembly was done inline in the C code. I stopped using a source control system, and instead started doing automated source backups with differences so I could trace any significant changes and identify where a bug came from.

The UPDATE.TXT for AfterGRASP became far more important because there were no realistic plans to ever produce a retail product or written manual. In the past all this documenting was intended to be processed into an end user manual. That’s what happened with GRASP and GLPRO. With AfterGRASP, since we eventually gave up on selling it and producing a manual for a rapidly advancing product, that development log became CRITICAL to track how features worked. Often the best information was the runnable examples, which could be copied and modified to figure out how to use a complex feature.

When I started working more on actually using AfterGRASP to produce other products at Digi-products, I ended up documenting my work more in email, writing long descriptions of what I was working on, what was fixed, what was planned, and my thoughts on what we should do next.

To this day, I still rely on repeated full backups with differences. Storage is so cheap that keeping everything costs almost nothing for source files. I still use source control occasionally when projects involve other people so we can document each change specifically and track each other’s work.

Q: You’ve worked with different companies and in different roles. Do you have any advice for other developers on the business-side of the job?

Have the conversations you avoid or even dread.

My largest regrets are from when I was avoiding talking to people, using intermediaries. I’ve always hated confrontations, or being drawn into conflicts, or talking about subjects I’m embarrassingly ignorant of. An example: I avoided talking directly to the investors funding Gmedia/GLPro. I literally NEVER spoke with them seriously until the company was falling apart. I left all that business to our CEO Jason, and didn’t get involved when he was replaced as head by someone from the investors. When I finally had the BIG conversation with the investors I learned important things about what had been going on, and told them critical information that had been hidden from them. I believe if I had been in regular communication with them, just discussing business and plans, then things would have turned out differently.

This raises another issue. You can take modesty too far. I hated “tooting my own horn”, and allowed others to step up and take credit. This is partly what happened at Gmedia where the investors hadn’t realized how important I was, and how much knowledge I had about all aspects of our business.

Only communicating with those you are comfortable with is a disaster. It has happened a couple of times that I avoided talking to people I felt were usurping my authority, or were opposed to my plans. The worst case was when I was still quite young, around 21, and some new middle managers were hired at CCM to manage development. Both were older than me and had traditional computer science educations, and it was an awkward situation where it wasn’t clear who was under my supervision. Bob was friendly, and easy to talk to. Tom was fairly stiff and traditional. The conflict with Tom brewed for a while with me avoiding him, and dividing the office. I would only speak with the founders of the company, avoiding Tom. Developers working directly under me followed this habit of avoiding Tom, and bad-mouthing him in an unprofessional way. I even got into at least one embarrassing shouting argument. Once things were falling apart and it was clear most of the company was being let go, I finally got to have a real conversation with Tom. It was a shock. I was the jerk. Tom was trying to make it all work. Tom was dealing with programmers who often slacked off, and he was struggling with me breaking the tools. I really wish I had had a few conversations with Tom early on to build up a connection and understanding. None of it mattered from a business standpoint, but from a personal standpoint I regret that adversarial relationship. I hope Tom did okay after all that.

Although things have changed a great deal in 40 years, some advice that I would give to anyone starting out is to make connections. Go to events, go to dinners, stay in touch with classmates, offer to help people with problems, be the free tech support, show people not only that you are smart, but also HELPFUL. That you have a reputation for solving problems and getting things done. In an age where AI reviews resumes and HR often has non-technical requirements for roles, the only way into some positions is often a personal connection. I only regret not keeping some of those connections going; sending a Christmas card or doing a Zoom call is very low cost, and you never know where it may lead. Virtually every job I’ve ever done came through a personal connection. Resumes and interviews got me nowhere.

Bend over backwards to not offend those who can help you! An example: my wife and I didn’t want extra kids at our wedding. This was a HUGE mistake, partly because we had preconceived notions about children that were completely wrong. Now that we have our own children who have been dis-invited to weddings/events, we understand what a huge inconvenience and insult this can be. We did allow closely related children, so friends who had children were quite upset that there would be children, just not theirs. We now realize this implied that we thought their children would be a problem, and they took this quite personally. We did back down and invite all kids, but the small rift never went away, and likely led to me not being called for a couple interesting jobs.

Some final advice I didn’t take, but recommend to others: This is from long ago, from several accomplished people. Do not pursue computer science, but rather pursue some kind of science or engineering you are interested in where you can use your interest in computers. Although I didn’t take this path, if I had gotten a college degree, I think that would have been a better path to more career options, and avoided some of the rapid turnover, aging out, and churn in computer work.

Q: Any other thoughts?

I’ve always been a hermit, going a week or more without stepping out the door, and was quite lucky to meet my wife on an early version of Match.com where email was the primary communication method. I’ve often found myself slipping into random sleeping hours, staying up late, and rolling forward, being up later and later. We had several years where I wasn’t on a fixed schedule, and she wasn’t working. She calls them “The Lost Years”. We watched a lot of TV, played video games, and time just disappeared. We sort of withdrew from the world, gained weight, and got little exercise.

All that changed when we decided to start a family. We both lost a lot of weight, and had our first child in 2009. We regret that lost time, one of my few serious regrets. We would have had more children, and moved to the countryside sooner if we’d had the will to just act! Having children has led to so many social connections that it’s sort of crazy when I think back on how isolated we had become. Even during COVID our social net grew. My wife used the remote education options to get associates and then bachelors degrees, doing so well that they asked her to remote tutor multiple subjects. We tried homeschooling for the kids, but gave up, instead enrolling our children in a tiny private school that was still open for in-person classes. Through that little school our family joined the associated church, and my wife has become the full time assistant-teacher/assistant-principal. I still consider myself a hermit living in the countryside, often going a week without leaving our property, but with a longer list of people we call friends. I’m even forced to take part in occasional social events, which I grudgingly admit is good for me.

Notes

This interview was conducted via email during April and May 2025.