Subject: Re: corba or sockets?
From: Erik Naggum <erik@naggum.net>
Date: 2000/11/04
Newsgroups: comp.lang.lisp
Message-ID: <3182355313545368@naggum.net>

* David Bakhash <cadet@alum.mit.edu>
| So they started out with 300 bps, and eventually got it up to 56K
| bps.  Almost 200% increase.  Okay.  I agree that's impressive.

Huh?  (/ (* 100 (- 56000 300)) 300) => 18566% increase in my book.  I
fail to understand why giving increases in percent makes sense when the
factor of increase is greater than 2 (100% increase), but I notice
people talking about something that went from 1 to 10 as a 1000%
increase, when it is clearly 900%, yet they also call it a "ten-fold
increase", which is correct.  Confusing multiplicative and additive
increases is much too easy when computing with percentages.

| Is the impressive part that, despite the massive increase in data
| rate, that the same protocol is used, without breaking down with
| timing issues, etc.?

The impressive part is their ability to make technology out of science.
I am one of those few people who are impressed by science and technology
as such.  Most people seem to be more impressed by sports achievements
and Harry Potter (either the books or the sales).

| Also, is it fair to say that they did a good job on an absolute
| scale?

Yes.

| For example, I think it's kinda hard to mess things up for a 300 bps
| channel, depending on how noisy it is, but assuming a reasonable SNR.

Yes, it _is_ kinda hard, but it _was_ pretty good when they did it.

| Do they deserve to be commended just because they made it up to 56K
| bps?

They have "made it" up to 10Mbps.

| TCP/IP handles data rates of 10M bps, about the same factor over 56K
| bps as 56K bps is to 300 bps.

TCP is a transport protocol.  IP is a network protocol.  Modems do not
transport or network; they simply move bits across a wire.  Incidentally,
TCP/IP works pretty well on gigabit networks, too, although the amount of
data on a wire with a high bandwidth*latency product is _staggering_,
which causes the window size in TCP to be a major problem if the
connection is even slightly lossy.  Therefore, gigabit TCP introduces the
same kinds of redundancy that the Telecom people added to their T3/E3
data link protocols (154Mbps) and above, and which have been used in
comparatively low-bandwidth satellite communications for ages, namely a
pretty good time distance between the data and its error recovery
signalling that makes it possible to recover from outage stretches of up
to 10 ms and sometimes more.
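[A minimal illustration, not from the original post: a Common Lisp
sketch of why the window size becomes a problem on links with a high
bandwidth*latency product.  It computes the bytes in flight for a few
link speeds under an assumed 100 ms round-trip time and compares that
with the 65535-byte window TCP can advertise without the window-scale
option of RFC 1323; the speeds and the round-trip time are illustrative
assumptions only.]

(defun bytes-in-flight (bits-per-second round-trip-seconds)
  "Bandwidth*delay product: the number of bytes that must be in flight
   (sent but not yet acknowledged) to keep the link busy."
  (/ (* bits-per-second round-trip-seconds) 8))

;; Classic TCP advertises its receive window in a 16-bit field, so at
;; most 65535 bytes may be outstanding unless window scaling is used.
(defparameter *max-unscaled-window* 65535)

(dolist (bps '(56000 10000000 1000000000))
  (let ((bdp (bytes-in-flight bps 1/10)))   ; assumed 100 ms round trip
    (format t "~&~12D bps: ~10D bytes in flight = ~6,1F unscaled windows~%"
            bps bdp (/ bdp *max-unscaled-window*))))

[At the assumed gigabit speed the round trip alone holds roughly 190
unscaled windows' worth of data, so a single lost segment can stall the
sender for a long time, which is where the extra redundancy mentioned
above starts to pay off.]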
| Judging protocols is hard business.

Yes, almost as hard as reading the specifications.

| Protocols are often designed to promise a certain amount of data
| integrity (usually very high).  If one person designed a protocol
| that optimized the hell out of the particular situation to achieve
| unbeatable performance, then should his protocol be deemed better or
| worse than one which wasn't as efficient, but when the channel
| specifications change (e.g. got faster), didn't break down?

The funny thing about the real world is that you actually have to deploy
_something_.  If you deploy, say, copper wire with certain electrical
characteristics all through your service area, and there is solid
international agreement on those characteristics, you know that you have
deployed copper wire with those characteristics and you can tell your
customers that they can buy telephones, modems, faxes, whatever,
according to those specifications.

We do not live in a world of magic (much TV and Harry Potter to the
contrary), so those characteristics are not going to change until
somebody goes out there and deploys some new cable at the cost of
hundreds of billions of dollars.  This is why it is very good
engineering and very intelligent use of the science and technologies
available to manage DSL on the same kind of wire that used to carry
300 bps only 20 years earlier.

I come from a family of engineers on both sides, and despite my
university education, I _still_ think highly of the art of engineering,
which is precisely that of managing to use the pre-existing physical
conditions ever better and more accurately.  Squeezing 56kbps out of a
twisted copper wire that in some cases was laid 50 years ago indicates
forethought and good engineering 50 years ago, and good engineering
today.  I take even more genuine pleasure in dealing with both the
people and their work when such competence and caring are evident than
I feel disdain and disgust when dealing with people who display
incompetence and carelessness, such as on USENET.

| I think a lot of it has to do with what the optimization metric is.
| It's probably not that much simpler to judge a protocol than to
| judge a person's overall intelligence.  Or maybe it's just me.
| There are certainly a lot of criteria.

How well you deal with the real world is a good criterion in both
assessment processes, if you ask me.  Getting lost in wishes for a world
that is easier to live in warns _me_ of low intelligence.  Judging what
is done in the physical world according to what would have been great in
a dream world is also a pretty good sign that someone is not firing on
all plugs.  But I, too, work in software because the real world of
physics and materials science is too hard for me to excel in.  That's
part of why I'm easily impressed by the people who manage to keep the
electricity grid of a whole city up and running while conditioning
separately synced megavolt feeds.  On the other hand, it is probably
because I try to understand the destructive forces of nature that I find
this impressive.  Those who think nature is kind just because it is
predictable have a hard time getting impressed with those who harness it
and actually predict it.

#:Erik
-- 
  Does anyone remember where I parked Air Force One?
                                          -- George W. Bush