Who am I? What did a Computer Scientist do during a typical career: Middleware (1)
Hardware, Software, Middleware!
Photo: Very fashionable middle-wear.
Middleware? What's that? The stuff in the middle? The plumbing, the "glue" between systems. Typically related to integration and interoperability for distributed many-to-many systems and systems of systems, and may include: message queues, message-oriented middleware (MOM), integration, ESBs, marshalling, protocol conversion, routing, content-based routing, XML validation, security, data transformation, service publication and discovery, SOA, ORBs, CORBA, .NET, J2EE, etc.
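To give a concrete flavour of just one item on that list: content-based routing simply means inspecting each message and choosing its destination based on what's in it. Here's a minimal sketch (all names invented, not from any real product):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;
import java.util.function.Predicate;

// Minimal content-based router: deliver each message to the first
// destination whose predicate matches the message content.
public class Router {
    private final List<Predicate<String>> predicates = new ArrayList<>();
    private final List<Consumer<String>> destinations = new ArrayList<>();

    public void addRoute(Predicate<String> matches, Consumer<String> destination) {
        predicates.add(matches);
        destinations.add(destination);
    }

    public void route(String message) {
        for (int i = 0; i < predicates.size(); i++) {
            if (predicates.get(i).test(message)) {
                destinations.get(i).accept(message);
                return; // first match wins
            }
        }
    }
}
```

Real middleware adds queues, transactions, transformation and so on around this core idea, but the "look inside the message and decide where it goes" part is the essence.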
Middleware has been around for ages, here's a brief "cover letter" of my experiences with "proto-middleware" and beyond.
Proto-middleware
When I was a kid I worked for the local TV repair shop. One of my jobs was putting up TV aerials, what fun (particularly on slippery "A" frame roofs in a high wind). When the 2nd TV station (TV2) in NZ started broadcasting it was difficult to receive in our small country town, as the transmitter was a long way away and the signal couldn't be picked up on the standard aerial. As a result the TV shop made a good trade by installing very large aerials just for TV2 on top of extra high poles, which gave a fuzzy but watchable signal. Our house had one of the first of these giant aerials installed (by me, naturally). From then on I was always curious about "action at a distance", or networks and distributed systems.
My 1st programming job, at the end of my 1st year of university (1980), was programming data loggers for a company in Wellington, NZ. These were both "space" and "time" displaced devices. You dropped them off in the bush for a few months and came back to retrieve them (and the data). If a local farmer hadn't decided to use them for target practice (shotgun holes were occasionally observed) you took them back to the office, extracted the data from them and analysed it. Sort of very high latency "networks". I programmed one whose purpose was to fly through clouds (on a plane) and measure rain drop density and frequency as quickly as possible, maybe an early form of "Cloud computing"?!
My 2nd job at uni was during 2nd and 3rd years holidays for the Bay of Plenty Electrical Power Board. I designed and prototyped a "ripple control" system which enabled monitoring and control of devices on the power network at a distance.
During my Masters degree a friend (Associate Professor Mark Utting) and I decided that because the university computer science course hadn't provided any hardware courses we would fill in the gap ourselves with some extra-curricular R&D by building our own microprocessor computer. We designed it around a 6809E CPU (one of the 1st microprocessors with a proper memory architecture and a whopping 64KB RAM) and wire-wrapped and debugged it ourselves (we fried one memory controller worth $100 along the way). It had dual 8" floppy disk drives. We borrowed the physics department's oscilloscope for h/w debugging, which somehow got stolen from the locked computer science lab we were using (luckily they left our computer behind). I recall having to front the Vice Chancellor in his massive penthouse office to try and explain what had happened, whoops. Once we got the h/w working we designed, bootstrapped and wrote a complete O/S (in BCPL) including disk drivers, utilities, file system, editor, Prolog interpreter, music synthesiser, etc. Because we were interested in harnessing increased processing power for applications such as Machine Learning and Music Synthesis we designed it as a shared-memory multi-processor architecture (around 2KB of fast RAM shared between pairs of CPU nodes).
Photos below of this computer (a bit dusty as it's been stored in the garage for years). It was relatively expensive to build at the time as some of the bigger chips cost $100+. I think we spent maybe $2,000+ and 100+ hours designing, building and programming it. It had 51 ICs; the port on the lower left was to connect to another board via the shared memory. It consumed 10A at 5V, had an 8MHz clock speed, and had a maze of wire wrap on the back!
Another photo showing the Digital to Analogue (DAC) converter chip (2nd in from the potentiometer) and CPU (next to the reset switch). Other big chips were the 64K Ram and Dynamic memory controller.
"Are we there there yet"? (to Middleware? Almost)
Our university (Waikato, NZ) was one of the most innovative for teaching computer science in NZ in the early 1980s: rather than buying into the shared mainframes that other universities had got sucked into, it went out on its own and bought its own PDP and then a VAX 11/780. These were fast (relatively) time-sharing computers which made programming a lot more interactive and fun. The whole edit, compile, run, debug cycle was still pretty slow, but better than the alternatives. A photo of the PDP arriving (by the time I arrived in 1980 the VAX had arrived too). I was astonished to find PDPs, although highly hacked, still in use for undergraduate computing at UNSW when I arrived in 1986. Coincidentally, the multi-user resource management system "SHARE" that had been developed to eke this sort of extended life out of them turned up again later in the 1980s/1990s when I worked for a startup in Sydney called Softway, who were commercialising it.
I recall that I found a loophole in the university degree regulations which enabled me to skip 1st year computer science (they taught Fortran, yuck, at the time), giving me a spare slot to do 2nd year Physics electronics in my 1st year, plus statistics (which included a programming component), giving entry into 2nd year computer science. This also gave me time to beg a computer account off one of the friendly computer science lecturers, buy a book on Pascal, and teach myself a proper language.
However, in order to do this it turned out that the only time that non-enrolled students could get a terminal was "after hours", and anyway the system was so mind-bogglingly slow during the day, due to all the 1st year programming students and the maths and stats people, that it wasn't worth using until after about 10pm. So started my night owl behaviour, which continued well into PhD studies at UNSW for the rest of the decade. Postgrad students had the run of the "Bat cave", a windowless, hollowed-out basement storage room with very little ventilation. It became popular from midnight until dawn with MSc students.
One of my 1st encounters with the VAX 11/780 VMS operating system's features was programming a prototype multi-player game using VAX Mailboxes for inter-process communication. These opened up a whole new conceptual world to me as I realised you could easily (?) get programs to talk to each other (at least when running on the same computer). Were Mailboxes middleware? Maybe; this article from 1992 refers to middleware running over Mailboxes.
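I don't have the VMS Mailbox API to hand any more, but the programming model it exposed (a queue that separate activities write to and read from) can be sketched with a modern blocking queue. This is an assumed stand-in, between threads in one JVM, whereas real VMS mailboxes were kernel objects known by name that crossed process boundaries:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// "Mailbox" sketch: two parties exchange messages through a shared queue.
public class MailboxDemo {
    public static String exchange() throws InterruptedException {
        BlockingQueue<String> mailbox = new ArrayBlockingQueue<>(16);

        // The "player" sends a move into the mailbox...
        Thread player = new Thread(() -> {
            try {
                mailbox.put("MOVE north");
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        player.start();

        // ...and the "game server" blocks until a message arrives.
        String received = mailbox.take();
        player.join();
        return received;
    }
}
```

The conceptual leap back then was exactly this decoupling: the sender and receiver don't call each other directly, they just agree on a mailbox.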
The VAX and I (MSc graduation photo):
I moved to Australia in 1986 to start a PhD (in Machine Learning) in the computer science department at UNSW in Sydney. During some of this time I was the senior tutor in networked systems (networks and distributed UNIX file/operating systems) for Associate Professor John Lions's course. I recall this was a rapid and extremely in-depth introduction to all things UNIX, networking and distributed, as up until then I hadn't used or even programmed UNIX (I learned fast as I didn't want to be embarrassed in front of tutorial classes!). The course covered networking as well as distributed databases, computation, operating systems, and file systems. Whoops, it appears that "my" copy of the text book (Tanenbaum) was actually John's (it has his name in the front).
I was also using the department's new UNIX machine, a Pyramid (called Cheops, obviously), for PhD programming work. It was the first computer I had access to that had real internet access (I think; I suspect the VAX at Waikato had limited access via a hotchpotch of possible store-and-forward networks, as I seem to recall that email sort of worked by 1985 - but took days, as machines only connected at night - and maybe we could get files remotely; see this interesting article on the history of the internet in NZ, No.8 wire networks ha ha). Here's something else made from No.8 fencing wire (sort of a network!):
I was curious to know when people started referring to middleware. This primer on "emerging technologies" from 1992 is titled "Middleware: networking's postal service".
From about 2004 there are predictions about the death of middleware (just google "middleware death"), particularly in the context of the rise of Web services, APIs, PaaS, Cloud, micro-services or the "next big thing". The worrying thing about some of the current trends, however, is that developers end up having to reinvent lots of things that middleware handled previously (e.g. transactions, security, etc), and in reality a lot of the middleware technologies are now available as services in vendors' cloud platforms (e.g. AWS). This eventually leads me to real middleware...
Middleware
Fast forward about a decade (I'll skip over other stuff I did related more to distributed systems and networking as I don't think it adds much to the story really; I guess the WWW was invented during that time too).
In late 1999 I moved from the CSIRO software engineering initiative (based on my recent Java experience mainly) to a more research oriented role with the Division of Maths and Stats (later the ICT Centre), working for Dr Ian Gorton on a new Advanced Distributed Software Architectures and Technologies Project (later renamed to Software Architecture and Component Technologies). I was involved in the Middleware Technology Evaluation Project, which was designed to conduct rigorous testing (benchmarking) of COTS technologies in the enterprise space, to understand the tradeoffs in the use of "standards" based technologies (like middleware) where different vendors may have implemented the standards in different ways and with different performance and scalability characteristics. I.e. if we wrote a single benchmark application according to the standard (possibly with architectural variants), would it (a) run, and (b) how fast, on each vendor's implementation?
Looking at architectural variations was a key aspect of this project, as the standards often suggested or allowed more than one pattern of use for the various component types: which would work better, why, and what were the tradeoffs? We started with a benchmark designed for CORBA (written in C) to emulate an online stock brokering system. I was involved in porting this to Enterprise Java (J2EE as it was then known) and designing how the architectural alternatives would work with a single code base. The main alternatives were different Entity Bean (EJB) persistence models (e.g. Container Managed Persistence, CMP, and Bean Managed Persistence, BMP), but also the use of Stateful Session Beans, and the number of servers in the AppServer cluster (1 or 2). I was also involved in setting up the testbed (h/w, database, test drivers, etc), and deployment of the benchmark onto multiple vendor products (or trying, in some cases without success). I was the product expert for deploying and debugging the benchmark on the Borland J2EE AppServer, SilverStream J2EE Server, and Sun J2EE AppServer (with no success). I also had some experience with the ECPerf benchmark (leading to involvement in the SPEC Java committee). I was involved in running the benchmarks and variants on multiple products (as the h/w was in the Canberra lab, all the vendors' products had to be installed, the benchmark deployed and debugged, and then run multiple times for each architectural variant, with results collected and analysed). I was also involved in setting up and configuring the database drivers and the JVMs, which turned out to be more significant than first realised. A lot of effort finally went into the setting up and tuning of JVMs and JDBC drivers, as we found that for JVMs the vendor product, type of JVM, garbage collection settings, number of containers, JVMs and CPU cores, etc. had significant impacts on performance and scalability.
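The CMP vs. BMP distinction is essentially about who writes the persistence code: with BMP the bean developer hand-codes the load/store themselves (via JDBC inside the EJB callbacks), while with CMP the container generates the equivalent code from the bean's declared fields. A plain-Java sketch of that division of responsibility (no app server, all names invented, and a Map standing in for the database):

```java
import java.util.HashMap;
import java.util.Map;

public class PersistenceSketch {
    // Stand-in for the database table: account id -> balance.
    static final Map<String, Integer> TABLE = new HashMap<>(Map.of("acct-1", 100));

    // BMP style: the bean code itself performs the load and store
    // (in reality hand-written JDBC inside the EJB lifecycle callbacks).
    static class BmpAccountBean {
        String id;
        int balance;
        void load(String id) { this.id = id; this.balance = TABLE.get(id); }
        void store()         { TABLE.put(id, balance); }
    }

    // CMP style: the bean only declares its persistent fields...
    static class CmpAccountBean {
        String id;
        int balance;
    }

    // ...and the "container" (a trivial stand-in here) supplies the
    // generated load/store logic on the bean's behalf.
    static class Container {
        CmpAccountBean activate(String id) {
            CmpAccountBean b = new CmpAccountBean();
            b.id = id;
            b.balance = TABLE.get(id);   // container-generated load
            return b;
        }
        void passivate(CmpAccountBean b) {
            TABLE.put(b.id, b.balance);  // container-generated store
        }
    }
}
```

This is why the choice mattered for benchmarking: with CMP the performance of the generated persistence code (and its caching) was entirely in the vendor's hands, so the same benchmark could behave very differently across products.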
It was also time consuming to do, and sometimes we broke the JVM. I found, reported and worked with Sun to fix a severe scalability flaw in a new version of their JVM related to thread management (from memory).
We migrated the benchmark through several changes in the J2EE standard (e.g. EJB versions), and had planned and prototyped enhancements including the use of JMS and Web Services in the benchmark.
Because I had become an expert on the J2EE standards and performance engineering during these experiments, and had some exposure to ECPerf, I was invited to represent CSIRO on the Standard Performance Evaluation Corporation (SPEC) Java committee. For several years I was involved in the development of the SPECjAppServer benchmarks (2001, 2002?), and the reviewing of member submissions.
During this time I also conducted, published and presented research around J2EE performance and scalability at international conferences and in journals, presented at industry and professional conferences and training events, and edited the 2nd edition of our detailed report and analysis of the J2EE products (published by CSIRO and Cutter).
I also conducted work and wrote reports for several consultancies (e.g. for Fujitsu and Borland, around performance and scalability and compliance with J2EE standards; for instance, Fujitsu had interpreted the standard strangely and required every different EJB to be deployed in a separate container, making deployment a nightmare. Was this compliant or not? It did run fast!).
I also developed a research proposal to conduct a performance and scalability evaluation of J2EE products with "novel" architectures in conjunction with INRIA and ObjectWeb (their Fractal J2EE server used novel internal architectural mechanisms). This joint proposal for travel funds from the Australian Academy of Science and French Embassy Fellowship scheme was successful, but I was unable to take it up due to changes in CSIRO project structures.
Other research on these platforms included code instrumentation and JVM profiling to determine how much time was spent in each sub-system, in order to understand the performance and scalability characteristics better. I also discovered that with sufficient information it is possible to approximately model and predict the performance and scalability under different loads, which also provided unique insights into potential bottlenecks, the potential speed-up if they were reduced, and why some of the vendor products had better performance or scalability than others. This was a very early precursor to my work with service-oriented performance modelling in NICTA from 2007 onwards. I also supervised several students during this work, including an evaluation of ebXML middleware, and an experimental analysis to understand the interaction of J2EE application object lifetimes with JVM garbage collector strategies and settings, and performance and scalability.
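The flavour of this kind of modelling can be illustrated with standard operational laws: the utilization law U = X * D, a bottleneck bound on throughput of 1/Dmax, and a simple open-queueing response-time estimate R = sum of D/(1 - U) over the resources. A sketch with invented numbers (the real models used measured per-sub-system service demands from the profiling):

```java
public class ScalabilityModel {
    // Service demand (seconds of work per request) at each sub-system,
    // the kind of numbers profiling produced. These values are made up.
    static final double CPU_DEMAND = 0.010;   // 10 ms
    static final double DB_DEMAND  = 0.025;   // 25 ms  (the bottleneck)

    // Bottleneck law: throughput cannot exceed 1 / (largest service demand).
    static double maxThroughput() {
        return 1.0 / Math.max(CPU_DEMAND, DB_DEMAND);
    }

    // Simple open-model response-time estimate at a given arrival rate
    // (requests/sec): R = sum of D / (1 - U), valid while every U < 1.
    static double responseTime(double arrivalRate) {
        double r = 0;
        for (double d : new double[]{CPU_DEMAND, DB_DEMAND}) {
            double u = arrivalRate * d;   // utilization law: U = X * D
            r += d / (1 - u);
        }
        return r;
    }
}
```

Even a toy model like this shows why identifying the bottleneck resource matters: the DB demand alone caps throughput at 40 requests/sec regardless of how many CPUs you add, which is exactly the sort of insight the real models gave about vendor differences.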
I presented published papers and attended workshops at: Middleware 2001, IFIP/ACM International Conference on Distributed Systems Platforms Heidelberg, Germany, 2001; Middleware 2005, ACM/IFIP/USENIX 6th International Middleware Conference, Grenoble, France, 2005; and IPDPS 2003, International Parallel and Distributed Processing Symposium, 2003, Nice, France.
Publications, presentations and reports from this Middleware Technology Evaluation work included:
- Paul Brebner, Emmanuel Cecchet, Julie Marguerite, Petr Tuma, Octavian Ciuhandu, Bruno Dufour, Lieven Eeckhout, Stéphane Frénot, Arvind S. Krishna, John Murphy, Clark Verbrugge, Middleware Benchmarking: Approaches, Results, Experiences, Concurrency and Computation: Practice and Experience, Volume 17, Issue 15, pages 1799-1805, 25 December 2005 (Published Online 28 June 2005). http://dx.doi.org/10.1002/cpe.918
- Paul Brebner, Jeffrey Gosper, “The J2EE ECperf benchmark results: transient trophies or technology treasures?”, Concurrency and Computation: Practice and Experience Volume 16, Issue 10, Pages 1023 – 1036, July 2004.
- Paul Brebner, Jeffrey Gosper, "J2EE infrastructure scalability and throughput estimation", SIGMETRICS Performance Evaluation Review, Volume 31, Number 3, December 2003.
- Paul Brebner, Jeffrey Gosper, "How Scalable is J2EE Technology?", ACM SIGSOFT Software Engineering Notes, Volume 28, Issue 3, May 2003.
- Paul Brebner, Ben Logan, Project JebX: A Java ebXML Experience, Third International Workshop on Internet Computing and E-Commerce (ICEC 2003, April, Nice, France, 2003), International Parallel and Distributed Processing Symposium (IPDPS 2003).
- Ian Gorton, Anna Liu, Paul Brebner, “Rigorous Evaluation of COTS Middleware Technology”, in: IEEE Computer, March 2003, pages 50-55.
- Paul Brebner, The Impact of Object Oriented Characteristics of Middleware Benchmarks, Abstract of Invited Position Paper for OOPSLA 2003 Middleware Benchmarking Workshop, http://d3s.mff.cuni.cz/conferences/oopsla2003/Brebner.pdf
- Brebner, P. (Ed.), Evaluating J2EE Application Servers - Version 2.1, July 2002, CSIRO Publishing and Cutter Consortium.
- Shuping Ran, Doug Palmer, Paul Brebner, Shiping Chen, Ian Gorton, Jeffrey Gosper, Lei Hu, Anna Liu, Phong Tran, J2EE Technology Performance Evaluation Methodology, Distributed Objects and Applications 2002 (DOA'02), University of California at Irvine, pp. 13-16.
- Contributed to the SPEC/OSG Java Subcommittee industry standard J2EE benchmarks: SPECjAppServer2001 and SPECjAppServer2002. https://www.spec.org/jAppServer2001/press_release.html, https://www.spec.org/jAppServer2002/press_release.html
- Paul Brebner and Shuping Ran, “Entity Bean A, B, C's: Enterprise Java Beans Commit Options and Caching”, Proceedings of IFIP/ACM International Conference on Distributed Systems Platforms, Heidelberg, Germany, November 2001, LNCS 2218, Springer-Verlag, pp 36-55.
- Shuping Ran, Paul Brebner, Ian Gorton, The Rigorous Evaluation of Enterprise Java Bean Technology, 15th International Conference on Information Networking (ICOIN-15), 2000
- Published Open Source version of CSIRO StockOnline J2EE benchmark and run instructions, http://forge.ow2.org/projects/stock-online/
- Smith, T., Brebner, P., "Enterprise Java Application Profiling: Object lifetimes and Garbage Collection in J2EE Applications", CMIS Technical report, 2003.
- Paul Brebner, Is your AppServer being crippled by the JVM?, Invited talk at BorCon2002, in proceedings of the 5th Annual Borland Conference Asia Pacific, 2002, Sydney.
- Paul Brebner, J2EE Architecture and Product Best Practices: Determining the benefits of working with J2EE, Invited talk at J2EE, .NET and Enterprise Application Summit (IIR conference), 2002.
- CSIRO/SEA MTE Seminar series: J2EE, What’s it all about, Paul Brebner, Melbourne, Sydney, Brisbane, 2002
- CSIRO/SEA MTE Seminar series: Getting the most out of your J2EE Application Server, Paul Brebner, Melbourne, Sydney, Brisbane, 2002.
- Brebner, P., Evaluating J2EE Application Servers: INTERSTAGE Executive Summary, March 2002 (CSIRO consultancy report for Fujitsu).
- Brebner, P., Analysis of INTERSTAGE Application Server J2EE standards compliance, 2002 (consultancy report for Fujitsu).
- Brebner, P., Evaluating J2EE Application Servers: Borland Enterprise Server Executive Summary, June 2002 (CSIRO consultancy report for Borland).
- Brebner, P., Service Oriented Architecture Management (SOAM) by Proxy Management of Web Services (PMOWS), Project funding proposal, CMIS Technical Report February 2002.
- Paul Brebner, Enterprise SOA integration architectural options analysis for NSW Department of Planning, CSIRO consultancy report, 2002.
- Published Open Source ECPerf kit for JBoss Application Server.
- Ian Gorton, Paul Brebner, Shuping Ran, Shiping Chen, Anna Liu, Doug Palmer, Evaluating J2EE Application Servers - Version 1.1, September 2001, CSIRO Publishing and Cutter Consortium, 96 pages.
- Paul Brebner, invited talk at BorCon2001, How to choose an Application Server, in proceedings of the 4th Annual Borland Conference Asia Pacific, 2001, Melbourne.
- Ran, S, Gorton, I, Tran, P and Brebner, P, Evaluating Borland Application Server Technology, VisiBroker ITS v1.2, Borland AppServer V4.5.1, Version 1.0, September 2000
I was on the program committee (PC) or an invited reviewer for workshops and journals (up until 2007) including:
- International Workshop on Engineering Service-Oriented Applications (WESOA 2006, co-located with ICSOC 2006),
- Workshop on Model Driven Development for Middleware (MODDM) at Middleware 2006
- Session chair for 3rd International Working Conference on Component Deployment 2005 (CD 2005, co-located with Middleware 2005)
- Organising Committee for OOPSLA Component and Middleware Performance Workshop (2003, 2004)
- TES'02 - 3rd VLDB Workshop on Technologies for E-Services (2002).
- IEEE Distributed Systems Journal
- IBM Systems Journal (paid)
- VLDB Journal (2002-2004).