Who am I? What did a computer scientist do during a typical career: Middleware (2), Grid, Web Services, ESBs, Cloud and beyond!
Hardware, Software, Middleware (Part 2)
Grid Middleware
The Middleware Technology Evaluation project at CSIRO was phased out in 2003 and I was moved to another project in the Grid space, architecting scientific grid computations (on a Grid cluster computer) using web services. I also developed a funding proposal for middleware for managing the SLAs of web services based on a combination of monitoring, autonomic computing, and elastic resourcing (e.g. deploying services dynamically to servers with spare capacity to meet demand and SLA requirements - a bit "cloud like" perhaps).
Based on my previous work with J2EE middleware R&D and evaluations, and the Grid architecture work, I was invited to UCL for a year ("of benefit to CSIRO", so I could take leave and come back again) to work for Professor Wolfgang Emmerich on an EPSRC funded UK eScience project, managing the evaluation of Grid middleware based on the OGSA technology stack across 4 locations.
This was an interesting project managing distributed resources and providing the main technical input. I also interacted with stakeholders and other interested parties in the UK eScience area, and presented regular technical updates (e.g. at Oxford, town hall meetings, etc). I managed the installation, configuration, securing, testing, and application/benchmark development, deployment, execution and analysis. We deployed a fully functional OGSA middleware infrastructure across the 4 locations (2 in London, Newcastle and Edinburgh) which included local and centralised service registries for infrastructure and end-user service discovery and consumption. We experimented with mechanisms to supplement the OGSA middleware in order to deploy, secure, distribute, look up, and consume end-user web services across the 4 sites, including the ability to load balance the services across the distributed resources. Some of our observations about missing features and architectural limitations included aspects that have since become common in cloud, such as metering and billing of resource usage, the ability to deploy end-user web services across the resources and load balance them, virtualisation to assist with security, isolation and end-user management of applications, and different classes and pricing of resources for batch vs. interactive jobs, etc.
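The load balancing mechanism above relied on looking up which sites' registries advertised a given service and then spreading requests across them. A minimal sketch of that idea (the site names and service identifiers are purely illustrative, not from the actual OGSA deployment):

```python
import itertools

# Hypothetical per-site service registries, standing in for the local
# registries deployed at each of the 4 sites.
REGISTRIES = {
    "london-1": ["svc-a", "svc-b"],
    "london-2": ["svc-a"],
    "newcastle": ["svc-a", "svc-c"],
    "edinburgh": ["svc-b", "svc-c"],
}

def sites_offering(service):
    """Return the sites whose local registry advertises the service."""
    return [site for site, services in REGISTRIES.items() if service in services]

def round_robin(service):
    """Yield sites for a service in round-robin order, spreading load."""
    return itertools.cycle(sites_offering(service))

rr = round_robin("svc-a")
print([next(rr) for _ in range(4)])
# → ['london-1', 'london-2', 'newcastle', 'london-1']
```

A real implementation would of course also handle registry updates, failures, and security contexts; this only shows the discovery-then-balance pattern.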
Once I had returned to the CSIRO ICT Centre I finished a detailed architectural tradeoff evaluation based on resource-centric vs. service-centric features of the OGSA middleware (and our additions), which was published in the Journal of Grid Computing. Surprisingly, the UCL OGSA evaluation project web site is still available! OGSA evolved into WSRF, which appears to have been a dead end due to a non-REST Web Services view of the world and incompatibility with other related standards.
Note that it seems that Grid computing is still alive and kicking. At the recent AWS Sydney Summit 2017, Adrian White gave an interesting talk on scientific innovation using AWS (Link to slides for "Risk Management and Particle Accelerators"). Scientific organisations (e.g. CSIRO) have deployed Grid middleware (including the typical large scale Grid resource management systems such as Condor) onto AWS, using it to spin up large numbers of instances quickly to solve big problems in short periods of time (i.e. rather than taking longer on fixed resources, they are using the maximum possible resources for the shortest period of time).
MULE ESB, RFID, and OGC Middleware
The CSIRO ICT cluster grid project had finished by the time I returned so I worked in the role of the centre integration architect for a while, looking at ways to sensibly build a platform from the various technology stacks coming out of the different research groups and integrating them to work together to solve new and more comprehensive problems. I developed a prototype, integrating several of the technologies using the MULE ESB.
I was involved for a while with the (theoretical) architectural evaluation of RFID middleware (e.g. distributed ALE architectures etc).
The next significant middleware project I worked on was to manage a contracted theoretical and experimental evaluation and report of the Open Geospatial Consortium (OGC) Sensor Web Enablement standards and technologies. This was a complex set of interacting XML based web and GIS-aware standards and open source middleware for collecting, organising, searching, managing, and disseminating spatial and sensor data in a distributed style across multiple locations and brokers. We developed some benchmark applications to inject sensor data and process it on the fly, store some of it in databases, and combine the stored and real-time results with queries for real-time processing in conjunction with historical data (trends etc).
This turned out to be fairly demanding of both the standards and the middleware, and we discovered performance and scalability issues and architectural/middleware issues (e.g. it turned out to be easy to create, but hard to detect or remove, "event loops" caused by brokers subscribing to each other's data sets). I also investigated changes to some of the standards to enable "demand based subscriptions" for events, and modelled alternatives to understand potential improvements in performance, scalability and resource usage (e.g. being able to subscribe and/or receive notification only when values change by a certain minimum percentage, or once in a specified time period, etc). I also evaluated open source and commercial complex/event stream processing middleware products in conjunction with this project (wrote benchmarks for them, tested under increasing load, analysed behaviour, etc) such as Coral8, Esper, etc. Note that some of the ideas from these standards and technologies have become mainstream through numerous open source data analytics frameworks and have also found their way into cloud technologies (e.g. Amazon Kinesis).
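The "demand based subscription" idea above can be sketched as a simple notification filter: forward a sensor event only when the value has moved by at least a minimum percentage, or when a minimum interval has elapsed since the last notification. This is an illustrative sketch of the concept, not the actual standards proposal; the class name and thresholds are hypothetical:

```python
import time

class DemandFilter:
    """Forward a notification only when the value has changed by at least
    min_pct percent since the last notification, or when min_interval
    seconds have elapsed (so slow-moving values still get through)."""

    def __init__(self, min_pct=5.0, min_interval=60.0):
        self.min_pct = min_pct
        self.min_interval = min_interval
        self.last_value = None
        self.last_time = None

    def should_notify(self, value, now=None):
        now = time.monotonic() if now is None else now
        if self.last_value is None:           # always forward the first event
            self.last_value, self.last_time = value, now
            return True
        changed = abs(value - self.last_value) >= abs(self.last_value) * self.min_pct / 100.0
        due = (now - self.last_time) >= self.min_interval
        if changed or due:
            self.last_value, self.last_time = value, now
            return True
        return False

f = DemandFilter(min_pct=10, min_interval=60)
# (value, timestamp) stream: only significant changes or overdue updates pass.
print([f.should_notify(v, now=t) for v, t in [(100, 0), (101, 1), (120, 2), (121, 70)]])
# → [True, False, True, True]
```

Filtering at (or near) the broker like this reduces downstream notification traffic dramatically for slowly varying sensor values, which was the motivation for modelling it.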
Pervasive Enterprise Middleware
For the last 10 years I've been a senior researcher with NICTA and then CTO of a start-up specialising in performance engineering (via a performance modelling tool which automatically builds performance models and does predictive analytics from APM data) to address enterprise and cloud performance, scalability, capacity and resource usage/price risks. Unsurprisingly, most of our clients have had middleware of some form or other, as it is now pervasive throughout enterprises. And SOAs and microservice architectures are both types of middleware (but maybe the middleware is on the outside?). Sometimes it has been the source of actual or potential problems, as it may be the bottleneck, or clients may not have sufficient monitoring visibility into what happens beyond the ESB boundaries.
I proposed a project and supervised a NICTA vacation scholar several years ago to build a benchmark for a distributed MULE ESB configuration and run load tests with monitoring and instrumentation in place. I then built performance models from the data and load test results and determined that we were able to make predictions about resource usage, thread concurrency, response times and scalability. This would make it easier to architect, validate, tune and resource ESB based applications in advance.
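The kind of prediction described above can be illustrated with standard operational laws: from measured per-request service demands, the utilization law (U = X × D) gives resource utilizations at a target throughput, and a simple M/M/1-style approximation gives per-resource response times. This is a generic textbook sketch, not the actual modelling tool; the resource names and demand figures are made up for illustration:

```python
def predict(arrival_rate, service_demands):
    """Utilization law: U = arrival_rate * demand; response time from the
    M/M/1 approximation R = D / (1 - U). Returns {resource: (U, R)}."""
    report = {}
    for resource, demand in service_demands.items():
        u = arrival_rate * demand
        if u >= 1.0:
            report[resource] = (u, float("inf"))  # saturated: no steady state
        else:
            report[resource] = (u, demand / (1.0 - u))
    return report

# Hypothetical service demands (seconds per request) as might be measured
# from an instrumented load test run.
demands = {"esb_cpu": 0.004, "backend_cpu": 0.010, "db_disk": 0.006}
for res, (u, r) in predict(50.0, demands).items():
    print(f"{res}: utilization={u:.0%}, response={r * 1000:.1f} ms")
# → esb_cpu: utilization=20%, response=5.0 ms
# → backend_cpu: utilization=50%, response=20.0 ms
# → db_disk: utilization=30%, response=8.6 ms
```

Even this crude sketch shows the useful part: the backend CPU saturates first (at 100 requests/s), so it bounds scalability, which is exactly the kind of question the models were built to answer in advance.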
Several clients had technology stacks which included Enterprise Java. One in particular was interesting (ARC Research Grants systems) as they were using a sophisticated model-driven development environment to specify the system and then automatically generate J2EE code. We were initially planning on using published work (from NICTA, but not our work) to build performance models based on knowledge of the number of persistence operators and the time taken per operator (CRUD). However, we conducted an initial PoC with this approach and it just didn't work (mainly because the J2EE persistence optimisation mechanisms had become more sophisticated, making some aspects harder to understand). We ended up putting a complete copy of the model-driven environment and the generated code onto a server in our lab, building a prototype web/JMX based J2EE monitoring tool which provided per transaction and tier performance data, and conducting experiments with multiple performance modeling approaches. We had been hoping to be able to do something automatically from the model-driven specifications, but eventually we had to make do with using the performance data only (as this provided the "definitive" information required). This put us on track for automatic building of performance models from APM data (e.g. Dynatrace) several years later as part of the NICTA tech startup.
The final project I will highlight involved a multi-year, multi-phase architectural analysis and de-risking project for Defence RPDE/DSTO/CIOG, focussing on a proposed ISR middleware system using ESB technologies. Initial work focussed on ESB vendor interoperability and performance modelling to validate the results. Subsequent activities focussed on examining architectural/topological alternatives for a single vendor ESB (IBM Triton ESB), and then conducting in-house laboratory benchmarking of a proxy ESB (MULE) for similar patterns and topologies of use. The results of these experiments were used to parameterise a performance model which also included workflow scenarios, workloads, node and user locations, network data, etc., obtained from other sources. The resulting performance models were scaled up for larger numbers of nodes to replicate possible deployment patterns, numbers and workloads, to conduct sensitivity analysis, and to make predictions about the performance, scalability, and capacity/cost of the proposed system. See my cloud blog for more details.
Cloud as Middleware
Doing the AWS solution architecture certification, it seems to me that AWS is now a "full stack" middleware provider, both in terms of the complexity (number of services and APIs) and the type of services offered. With REST services, in theory everything can interoperate with everything else (although doesn't this just end up with complex point-to-point spaghetti integration patterns?).

A Google of "AWS middleware" gives about 500,000 hits (although some of these may be middleware to assist with migration to AWS, or middleware that can be deployed onto AWS). But I think that still makes the point: middleware isn't dead, it's just "lurking" in the details.
Microservices as Middleware?
This is the "beyond" idea... Microservices architectures have become the next big thing. The theory is that more, smaller services are better (for DevOps anyway). During ICPE2016 in Delft last year I had the opportunity to hear several talks on SOA, microservices migrations, modelling and performance, and to participate in some discussions around this space. As a result I did some preliminary work with modelling the performance, scalability and resource implications of moving incrementally from a typical SOA (using existing customer APM data and performance models we had built) to microservices architectures. Some of the variables I looked at included how many complex services there are and how many (redundant) services they call (as this tends to result in the Zipf distribution effect observed in many SOAs, where only a few services have most of the service demand), how the services are sliced and diced to turn them into microservices, how much overhead there is per microservice that was previously "absorbed" by having coarser grained services, and how many original services are actually retired vs. kept in use, etc. It turns out that, initially at least, the complexity, service demand and response times may go up, so watch out. I haven't completed or published this work yet, but if you think it may still be interesting let me know :-) See this blog under Zipf's law for more info.
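The Zipf distribution effect mentioned above is easy to see with a few lines of code: if service demand follows a Zipf law by rank, a handful of services carry most of the load. This is a generic illustration of the distribution, not the actual customer data or model:

```python
def zipf_shares(n, s=1.0):
    """Zipf-distributed share of total demand for n services ranked by
    popularity: the rank-k service gets a weight proportional to 1/k^s."""
    weights = [1.0 / (rank ** s) for rank in range(1, n + 1)]
    total = sum(weights)
    return [w / total for w in weights]

# With 20 services, what fraction of total demand do the top 3 carry?
shares = zipf_shares(20)
print(f"top 3 of 20 services: {sum(shares[:3]):.0%} of demand")
# → top 3 of 20 services: 51% of demand
```

This is why slicing decisions matter so much in a SOA-to-microservices migration: splitting one of the few high-demand services multiplies per-service overhead exactly where the system is already busiest.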
A couple of articles on microservices and middleware are:
https://dzone.com/articles/relation-of-middleware-to-docker-and-cloud-native
https://www.voxxed.com/blog/2015/01/good-microservices-architectures-death-enterprise-service-bus-part-one/
Publications from this work
Paul Brebner, Automatic Performance Modelling from APM Data; Past Experiences and Future Opportunities, invited presentation at Workshop on Performance and Reliability (WOPR25), Wellington, NZ, February 15-17, 2017.
Paul Brebner, Automatic Performance Modelling from Application Performance Management (APM) Data: An Experience Report, 7th ACM/SPEC International Conference on Performance Engineering (ICPE2016), Delft, the Netherlands, March 12-16, 2016.
Paul Brebner, Recent Experiences and Future Challenges Using Automatic Performance Modelling to Complement Testing, invited presentation, The Fifth International Workshop on Large-Scale Testing (LT 2016), co-located with the 7th ACM/SPEC International Conference on Performance Engineering (ICPE2016), Delft, the Netherlands, March 12-16, 2016.
Paul Brebner, Jon Gray, System and a Method for Modelling the Performance of Information Systems, Australian Innovation Patent Number 2015101031, 2015.
Brebner, P. C. 2012. Experiences with early life-cycle performance modeling for architecture assessment. In Proceedings of the 8th international ACM SIGSOFT Conference on Quality of Software Architectures (Bertinoro, Italy, June 25 - 28, 2012). QoSA '12. ACM, New York, NY, 149-154. DOI= http://doi.acm.org/10.1145/2304696.2304721
Brebner, P. C. 2012. A performance modeling "blending" approach for early life-cycle risk mitigation. In Proceedings of the 3rd ACM/SPEC international Conference on Performance Engineering (Boston, Massachusetts, USA, April 22 - 25, 2012). ICPE '12. ACM, New York, NY, 271-274. DOI= http://doi.acm.org/10.1145/2188286.2188336
Brebner, P. C. 2012. Is your cloud elastic enough?: performance modelling the elasticity of infrastructure as a service (IaaS) cloud applications. In Proceedings of the 3rd ACM/SPEC international Conference on Performance Engineering (Boston, Massachusetts, USA, April 22 - 25, 2012). ICPE '12. ACM, New York, NY, 263-266. DOI= http://doi.acm.org/10.1145/2188286.2188334
Brebner, P. C. 2011. Real-world performance modelling of enterprise service oriented architectures: delivering business value with complexity and constraints. SIGMETRICS Perform. Eval. Rev. 39, 3 (Dec. 2011), 12-12. DOI=http://doi.acm.org/10.1145/2160803.2160813
Brebner, P. C. 2011. Real-world performance modelling of enterprise service oriented architectures: delivering business value with complexity and constraints. In Proceedings of the 2nd ACM/SPEC international Conference on Performance Engineering (Karlsruhe, Germany, March 14 - 16, 2011). ICPE '11. ACM, New York, NY, 85-96. DOI= http://doi.acm.org/10.1145/1958746.1958762
Paul Brebner, Is your Cloud Elastic Enough? Part 1. CMG Measure IT, Issue 2, 2011. http://www.cmg.org/wp-content/uploads/2011/08/m_82_3.pdf
Paul Brebner, Is your Cloud Elastic Enough? Part 2. CMG Measure IT, Issue 3, 2011. http://www.cmg.org/wp-content/uploads/2011/10/m_84_3.pdf
Brebner, P. and Liu, A. 2011. Performance and cost assessment of cloud services. In Proceedings of the 2010 international Conference on Service-Oriented Computing (San Francisco, CA, December 07 - 10, 2010). Springer-Verlag, Berlin, Heidelberg, 39-50.
Paul Brebner, Anna Liu, Modeling Cloud Cost and Performance, Annual International Conference on Cloud Computing and Virtualization (CCV 2010), Singapore, 2010.
Brebner, P. 2009. Service-Oriented Performance Modeling the MULE Enterprise Service Bus (ESB) Loan Broker Application. In Proceedings of the 2009 35th Euromicro Conference on Software Engineering and Advanced Applications (August 27 - 29, 2009). IEEE Computer Society, Washington, DC, 404-411. DOI= http://dx.doi.org/10.1109/SEAA.2009.57
Paul Brebner, Liam O'Brien, Jon Gray: “Performance modeling evolving Enterprise Service Oriented Architectures”. In Software Architecture, 2009 & European Conference on Software Architecture. WICSA/ECSA 2009. Joint Working IEEE/IFIP Conference on. 14-17 Sept. 2009. 71 – 80. DOI=http://dx.doi.org/10.1109/WICSA.2009.5290793
Brebner, P., O’Brien, L, Gray, J., “Performance modeling power consumption and carbon emissions for Server Virtualization of Service Oriented Architectures (SOAs)”. EDOCW 2009. 13th. 92-99.
Paul C. Brebner, Liam O'Brien, Jon Gray. 2008. Performance modeling for service oriented architectures. In Companion of the 30th international conference on Software engineering (ICSE Companion '08). May 10-18, Leipzig, Germany, 2008. ACM, 953-954. DOI=http://dx.doi.org/10.1145/1370175.1370204
Liam O'Brien, Paul Brebner, Jon Gray. Business Transformation to SOA: Aspects of the Migration and Performance and QoS Issues. 2nd International Workshop on Systems Development in SOA Environments. SDSOA 2008. May 11, 2008.
Paul Brebner, Liam O'Brien, Jon Gray. Performance Modelling for e-Government Service Oriented Architectures (SOAs). 19th Australian Software Engineering Conference. ASWEC 2008. 25-29 March 2008, Perth, Australia. Experience Report Proceedings. pp. 130-138. Presentation.
Quan Z. Sheng, Kerry L. Taylor, Zakaria Maamar, and Paul Brebner. Research in RFID Data: Issues, Solutions, and Directions (Chapter), in: The Internet of Things: from RFID to Pervasive Networked Systems. Editor: L. Yan et al. Auerbach Publications. ISBN: 978-1420052817. February 2008.
Taylor, K., Brebner, P., Kearney, M., Zhang, D., Lam, K., Tosic, V., "Towards Declarative Monitoring of Declarative Service Compositions", in Proc. of the Second International Workshop on Services Engineering (SEIW 2007), ICDE 2007, Istanbul, Turkey, April 16, 2007.
P. Brebner and W. Emmerich, "Two Ways to Grid: The contribution of Open Grid Services Architecture (OGSA) mechanisms to Service-centric and Resource-centric lifecycles", Journal of Grid Computing, Issue: Online First, Springer, 31st January 2006. (pre-publication version)
Paul Brebner, Wolfgang Emmerich, "Deployment of Infrastructure and Services in the Open Grid Services Architecture (OGSA)", Proceedings: Third International Component Deployment Conference, CD 2005, Grenoble, France, Dearle, A, Eisenback, S. (Eds.), LNCS Volume 3798/2005, pp. 181-195. (pre-publication version)
Reports from this work (excluding confidential client reports)
Paul Brebner, Cloud Security and Performance, Asian World Summit, 2nd Annual Security Summit, Kuala Lumpur, May 2012.
Paul Brebner, Jon Gray, Shared Services and Cloud Computing, ACS Canberra Forum, September 2012.
Paul Brebner, Cloud Computing Modelling, Master class, Cloud Computing Forum, February 2012, Canberra.
Paul Brebner, Jon Gray, Migrating to the Cloud, PMI Conference, Canberra, 2011.
Paul Brebner, Masterclass, Data Intensive Cloud Computing, 2nd Annual Digital Information Management Summit, Sydney, 2011
Paul Brebner, Cloud for Government, course for NSW Government Departments, Sydney, July 2010.
Paul Brebner, Modelling Cloud Cost and Performance, A comparison of Amazon EC2, Google App Engine, and Microsoft Azure, NICTA research seminar series, Canberra Research Laboratory, June 2010.
Anna Liu, Paul Brebner, Enterprise Cloud Computing: Understanding Costs, Managing Risks, and Realising Business Value, contracted customised workshop for DIAC, Canberra, June 2010.
Paul Brebner, Green ICT Overview, NICTA research seminar series, Canberra Research Laboratory, May 2009.
Paul Brebner, Liam O’Brien, Managing Performance Risks in Service Oriented Architectures (SOA), Overview (day 1), Practical classes (days 2-3), NICTA short courses, 24-26 March 2009.
Paul Brebner, Jon Gray, Virtualization technology in a SOA environment, ACS Annual Conference, Canberra, March 2009,
Liam O’Brien, Paul Brebner, Modelling and Analysis for Measuring Performance Attributes of SOAs, Workshop, Software & Systems Quality Conference, October, 2008.
Paul Brebner, Is Non Functional Testing becoming more Business Critical?, invited keynote, SQS Software Quality Systems Conference, October 2008.
Brebner, P. Service Semantics. Semantic Technologies for Business and Government. NICTA/AGIMO Seminar, 13 November, 2007. "Service Semantics" Presentation (powerpoint).
Brebner, P., Walker, G., Bai, Q., Robinson, B., Evaluation of the Open Geospatial Consortium (OGC) Sensor Web Enablement (SWE) services and architecture, CSIRO ICT Centre, July 2007, Technical Report 07/24 (contracted report for the OGC International Web Enablement Testbed).
Brebner, P., Event-based Sensor Web Services. In: Proceedings of the CSIRO ICT Centre Annual Conference, 2006.
Brebner, P., Standards for Web Services: WSDL 2.0, Web Research and Standards SIG, AUSWEB '06.
UK-OGSA Evaluation Project Report 2.0: Evaluating OGSA across organisational boundaries (2004) PDF Document (27pp)
Brebner, P., “UK OGSA Evaluation Project: Initial Findings”, Invited presentation at the: Core e-Science Programme Town Meeting on Grid and Web Services, 23 April 2004, London, UK.
Brebner, P. (Ed.), UK OGSA Evaluation Project, Report 1.0, Evaluation of Globus Toolkit 3.2 (GT 3.2) Installation, UK EPSRC Funded project deliverable, 24 September 2004.
Brebner, P., "Grid middleware is easy to install, configure, debug and manage - across multiple sites (One can't believe impossible things)", Oxford University Computing Laboratory, 15 October 2004.
Brebner, P., "Grid Middleware - Principles, Practice, and Potential", University College London, Computer Science Department Seminar, 1 November 2004.
Program Committee Member and Reviewing from this work
Primary NICTA representative Standard Performance Evaluation Corporation Research Group (SPEC RG) and SPEC Cloud Working Group, 2011-2013. Attended annual face-to-face meetings 2011 and 2012. Observer 2013-2016.
Program Committee Member for: 8th Middleware for Next Generation Internet Computing (MW4NG 2013), Security and Trusted Computer Track of the Sixth International Conference on Complex, Intelligent, and Software Intensive Systems (CISIS-2012), Middleware for Service Oriented Computing (MW4SOC 2007-MW4SOC 2012, co-located with Middleware 2007-2012), Second International Conference on Networked Digital Technologies (NDT 2010), 1st International Workshop on Security and Performance in Emerging Distributed Architectures (SPEDA2010), First International Workshop on Engineering Mobile Service Oriented Systems (EMSOS2010), SOFtware SEMinar: 35th Conference on Current Trends in Theory and Practice of Computer Science (SOFSEM 2009), Theory and Practice of Software Services track (2009), Middleware For Web Services Workshop (MWS 2008) at 12th IEEE International EDOC Conference on Enterprise Computing (EDOC 2008), Middleware for Service Oriented Computing Workshop (MW4SOC) at ACM/IFIP/USENIX 9th International Middleware Conference (Middleware 2008), organiser for Quality-of-Service Concerns in Service Oriented Architectures Workshop (QOSCSOA08) at 6th International Conference on Service Oriented Computing (ICSOC 2008), IEEE International Workshop on Software Engineering for Adaptive Software Systems (SEASS'08) at IEEE International Conference on Web Services (ICWS 2008), International Workshop on RFID Technology - Concepts, Applications, Challenges (IWRT'08) at 10th International Conference on Enterprise Information Systems (ICEIS2008), 1st International Workshop in RFID technology (2007), 2nd International Workshop on Engineering Service-Oriented Applications (WESOA 2006, co-located with ICSOC 2006), Workshop on MOdel Driven Development for Middleware (MODDM) at Middleware 2006, Session chair for 3rd International Working Conference on Component Deployment 2005 (CD 2005, co-located with Middleware 2005).
Invited reviewer: Paid reviewer in the area of virtualization and cloud computing for the French National Research Agency, Equipex, for the "Excellence Infrastructures" call for proposals (1 billion €, 2011), IEEE Transactions on Software Engineering (2006-2010), International Journal of Systems and Service-Oriented Engineering (IJSSOE) Special Issue on "Engineering Middleware for Service-Oriented Computing" (2010), Expert Reviewer for proposals in the 4th round of JACQUARD, the Dutch Software Engineering grants program (http://www.jacquard.nl/, 2008).
Proposal accepted for a BoF Session on "Virtualization, Cloud Computing and Software Architecture", attended and chaired discussion, Joint Working IEEE/IFIP Conference on Software Architecture 2009 & European Conference on Software Architecture 2009 (WICSA 2009), 15 September 2009, Cambridge, UK.
35th International Conference on Software Engineering (ICSE 2013) Publicity Team, and 34th International Conference on Software Engineering (ICSE 2012) Software Engineering in Practice Track Committee Member (reviewed 20 papers)