[evla-sw-discuss] [Fwd: [ace-announce] Request for feedback]

Bill Sahr bsahr at cv3.cv.nrao.edu
Mon Jan 7 20:38:38 EST 2002


Thought I would post this email to evla-sw-discuss to provide
a little food for thought.  I believe ALMA is using both ACE
and TAO.  TAO is built upon ACE.  Is there a role for ACE in
the EVLA software?

Bill

-------- Original Message --------
Subject: [ace-announce] Request for feedback
Date: Sat, 05 Jan 2002 10:21:02 -0600
From: "Douglas C. Schmidt" <schmidt at cs.wustl.edu>
To: ace-users at cs.wustl.edu, ace-announce at cs.wustl.edu


Hi Folks,

        Steve and I have been invited to write a short "content piece"
about portability, ACE, and open-source for the InformIT online
magazine <http://www.informit.com>.  Part of the purpose of this piece
is to motivate folks to read C++NPv1 to learn more about these various
topics.  I've enclosed a draft of the material below.  It would be
great if we could get some feedback on ways to improve the article!

Thanks very much,

        Doug

----------------------------------------

Why Standards Alone Won't Get You Portable Software and 
How to Make Them Work for You.

Douglas C. Schmidt and Stephen D. Huston

1. Motivation

The need to write portable software that runs on a variety of
computing platforms becomes more obvious every day.  Leading
mainstream computer vendors, such as IBM, HP, Compaq, and Dell, offer a
mix of Windows, Linux, and UNIX operating systems across their
hardware platforms.  Likewise, as people become ever more connected
and mobile, many computer vendors are also supporting embedded and
handheld systems.

As a software professional, it's your job to develop software that
enables your company to gain competitive advantage.  Often, the key
to that advantage lies in creating portable software that runs on
multiple platforms, and versions of platforms.  If you believe the
talk in some software development circles, you might think that de
facto standards, such as Windows, or de jure standards, such as POSIX
and UNIX98, are all you need to make your applications portable across
the growing variety of computing platforms outlined above.
Unfortunately, the old adage that "the nice thing about standards is
that there are so many to choose from" is even more applicable today
than it was a decade ago.  There are now dozens of different operating
system (OS) platforms used in commercial, academic, and government
projects, including real-time, embedded, and handheld systems; personal
and laptop computers; an assortment of various-sized UNIX or Linux
systems; and "big iron" mainframes and even supercomputers.
Moreover, the number of OS permutations grows with each new version
and variant.

In theory, the idea behind standards is sound: if a standard is
implemented by many vendors (or at least one uber vendor), code that
adheres to the standard will work on all platforms that implement the
standard.  In practice, however, standards evolve and change, just
like different versions of software.  Moreover, vendors often choose
to implement different standards at different times.  It's therefore
likely that you'll work on multiple computing platforms that implement
different standards in different ways at different times.

Since your customers pay you to solve their business needs, rather
than wrestle continuously with portability details, it's worthwhile to
consider how to ensure that standards work for you, rather than
against you.  To assist you in this quest, this article describes some
of the difficulties you'll likely encounter when relying on
standards--OS standards in particular--for portability.  It then
describes some of the ways that host infrastructure middleware and
open-source software models can help you develop portable networked
applications more quickly and easily.

2. Problem: Programming Yourself into a Corner with OS APIs

An OS can be viewed as a "silicon abstraction layer," which shields
application software from the details of the underlying hardware.  If
just one instance of one version of an OS application programming
interface (API) were adopted universally, our programming tasks would
clearly be simplified.  As noted above, however, that's not the case
today, nor will it ever be due to the need to support legacy
applications.  As a result, the common practice of programming
applications directly to OS APIs yields the following problems:

. It's not portable.  Some standards are implemented on only a subset
  of your target platforms.  For example, three popular threading APIs
  are Windows threads, UNIX International (UI) threads, and POSIX threads
  (Pthreads).  They all have different features and semantics, and of
  course, different APIs.  Therefore, if your code needs to run on
  Windows and anything else that isn't Windows, you'll need to deal
  with at least two of these three APIs.  To make matters worse, these
  APIs have evolved over time, so code written to an earlier version
  of the API may not compile with later versions.  If the source code
  for OS APIs isn't available, you can't adjust it for any backwards
  compatibility you may need.  Likewise, trying to integrate your own
  versions of vendor-supplied shared libraries would be problematic
  even if you could find a way to do it.

. The differences are tedious to find and work around.  For instance,
  the enormously popular BSD Socket API is used for TCP/IP network
  programming.  It's widely implemented and you can usually count on
  it being available on any platform that supports the TCP/IP
  networking protocols.  However, the integration of the Socket API
  into the operating system's runtime libraries can yield
  functionality that's not portable.  For example, the read() and
  write() system calls on many UNIX systems can be used to receive and
  send socket data, respectively.  However, they do not work on systems
  where the Socket API isn't closely integrated with the other I/O
  subsystems, as you've no doubt noticed if you've tried to port
  UNIX-based Socket applications to Windows or many real-time
  operating systems.  (The sketch after this list shows the kind of
  conditional compilation these differences force on application code.)

. It's error-prone since native OS APIs written in C often lack
  type-safe, reentrant, and extensible system function interfaces and
  function libraries. For example, endpoints of communication in the
  Socket API are identified via weakly typed integer or pointer I/O
  handles.  Weakly typed handles increase the likelihood of subtle
  programming errors that don't surface until run time, which can
  cause serious problems for your customers.

. It encourages inadequate design techniques since many networked
  applications written using OS APIs are based on algorithmic design,
  rather than object-oriented design.  Algorithmic design decomposes
  the structure of an application according to specific functional
  requirements, which are volatile and likely to evolve over time.
  This design paradigm therefore yields non-extensible software
  architectures that can't be customized rapidly to meet changing
  application requirements.
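
To make these problems concrete, consider the following sketch.  It's
our minimal illustration, not production code, and assumes only the
standard BSD Socket and Winsock calls shown; it captures the kind of
conditional compilation that creeps into applications programmed
directly to native OS APIs:

    // A minimal sketch (illustrative only) of coding directly to
    // native Socket APIs.  On Windows, a socket is not a file
    // descriptor: data must be sent with send() rather than write(),
    // and sockets are closed with closesocket() rather than close().
    #if defined (_WIN32)
    #  include <winsock2.h>
       typedef SOCKET sock_t;
    #else
    #  include <sys/socket.h>
    #  include <unistd.h>
       typedef int sock_t;
    #endif
    #include <string.h>

    int send_greeting (sock_t s)
    {
      const char msg[] = "hello\n";
    #if defined (_WIN32)
      // read()/write() do not work on Winsock socket handles.
      return send (s, msg, (int) strlen (msg), 0);
    #else
      // On most UNIX systems a socket is just a file descriptor.
      return (int) write (s, msg, strlen (msg));
    #endif
    }

    void close_sock (sock_t s)
    {
    #if defined (_WIN32)
      closesocket (s);
    #else
      close (s);
    #endif
    }

Every application written this way must carry and test both branches;
multiply that by threading, synchronization, and event demultiplexing
APIs and the maintenance burden grows quickly.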

The bottom line is that it has become prohibitively expensive and
time-consuming to program yourself into a corner by developing applications
entirely from scratch using nonportable native OS APIs and algorithmic
design techniques.  In this age of economic upheaval, deregulation,
and stiff global competition, few companies can afford to bear these
expenses, particularly when there's a better way!

3. An Appealing Solution: Host Infrastructure Middleware

An increasingly popular solution to the problems described above is to
interpose host infrastructure middleware between OS APIs and
application software.  Host infrastructure middleware provides an "OS
abstraction layer" that shields application software from the details
of the underlying OS.  Widely used examples of host infrastructure
middleware include the following.

. The Sun Java Virtual Machine (JVM), which provides a
  platform-independent way of executing code by abstracting the
  differences between operating systems and CPU architectures.  A JVM
  is responsible for interpreting Java bytecode and for translating
  the bytecode into an action or operating system call.  It's the
  JVM's responsibility to encapsulate platform details within the
  portable bytecode interface, so that applications are shielded from
  disparate operating systems and CPU architectures on which Java
  software runs.

. The Microsoft Common Language Runtime (CLR), which is the host
  infrastructure middleware foundation upon which Microsoft's .NET web
  services are built.  The Microsoft CLR is similar to Sun's JVM.  For
  example, it provides an execution environment that manages running
  code and simplifies software development via automatic memory
  management mechanisms, cross-language integration, interoperability
  with existing code and systems, simplified deployment, and a
  security system.

. The ADAPTIVE Communication Environment (ACE), which is a freely
  available, open-source, highly portable toolkit written in C++ that
  shields applications from differences between native OS programming
  capabilities, such as connection establishment, event
  demultiplexing, interprocess communication, (de)marshaling,
  concurrency, and synchronization.  At the core of ACE is its OS
  adaptation layer and C++ wrapper facades that encapsulate OS file
  system, concurrency, and network programming mechanisms.  The higher
  layers of ACE build upon this foundation to provide reusable
  frameworks that handle network programming tasks, such as
  synchronous and asynchronous event handling, service configuration
  and initialization, concurrency control, connection management, and
  hierarchical service integration.

The primary differences between ACE, JVMs, and the .NET CLR are that
(1) ACE is always a compiled C++ interface, rather than an interpreted
bytecode interface, which removes a level of indirection and helps to
optimize runtime performance, (2) ACE is open-source, so it's possible
to subset it or modify it to meet a wide variety of needs, and (3) ACE
runs on more OS and hardware platforms than JVMs and the CLR (a short
usage sketch follows the platform list below), including
  . PCs, for example, Windows (all 32/64-bit versions), WinCE;
    Redhat, Debian, and SuSE Linux; and Macintosh OS X.
  . Most versions of UNIX, for example, SunOS 4.x and Solaris,
    SGI IRIX, HP-UX, Digital UNIX (Compaq Tru64), AIX, DG/UX,
    SCO OpenServer, UnixWare, NetBSD, and FreeBSD.
  . Real-time operating systems, for example, VxWorks, OS/9,
    Chorus, LynxOS, Pharlap TNT, QNX Neutrino and RTP, RTEMS, and
    pSoS.
  . Large enterprise systems, for example, OpenVMS, MVS OpenEdition, 
    Tandem NonStop-UX, and Cray UNICOS.
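
To give a feel for what portable ACE code looks like, here is a
minimal sketch of spawning and joining a thread with ACE's standard
ACE_Thread_Manager wrapper.  This is our illustration of the public
ACE API, not an excerpt from C++NPv1, and error handling is kept
deliberately minimal:

    // A minimal sketch of portable thread creation with ACE.  The
    // same code compiles unchanged on Windows, Pthreads, and UI
    // threads platforms; ACE maps spawn() onto the native API.
    #include "ace/OS.h"
    #include "ace/Thread_Manager.h"

    // ACE thread entry points use ACE_THR_FUNC_RETURN to paper over
    // the differing return types of native thread functions.
    static ACE_THR_FUNC_RETURN
    worker (void *arg)
    {
      int *n_secs = static_cast<int *> (arg);
      ACE_OS::sleep (*n_secs);  // Portable sleep from the OS adaptation layer.
      return 0;
    }

    int main (int, char *[])
    {
      int n_secs = 2;

      // Spawn a joinable thread via the singleton thread manager.
      if (ACE_Thread_Manager::instance ()->spawn (worker, &n_secs) == -1)
        return 1;

      // Block until all threads spawned by this manager have exited.
      return ACE_Thread_Manager::instance ()->wait ();
    }

Compare this with the #ifdef-laden sketch in Section 2: the platform
variation has moved out of the application and into ACE.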

4. Improving Application Portability with ACE

ACE contains ~250,000 lines of C++ code, ~500 classes, and ~10
frameworks [ACE].  To provide its powerful capabilities across a
diverse range of platforms, ACE is designed using a layered
architecture.  This design gives networked application developers a
wide variety of usage options to match their needs.  It also makes
reuse much simpler within ACE itself.  The layers in ACE that enable
this power are described below.

. ACE's OS adaptation layer.  This layer does most of the work to
  unify the diverse set of supported platforms and standards under a
  common API.  In most cases, the OS adaptation layer presents a
  POSIX-like interface.  It's possible to write many networked
  applications portably using only ACE's OS adaptation layer.  Since
  this layer presents a flat C-like function API, however, using it
  directly incurs the drawbacks of algorithmic design.  Moreover, this
  layer's purpose is to unify the means for implementing well-defined
  behavior, such as opening a file, writing data to a socket, or
  spawning a thread.  Behaviors that are not common across platforms,
  such as forking a process (which isn't available on Windows, for
  example), are not implemented at this layer.  For these reasons, many
  networked applications use ACE's wrapper facade layer, which is
  described next.

. ACE's wrapper facade layer.  This layer provides an object-oriented
  form of systems programming for networked applications.  Its classes
  reify the Wrapper Facade pattern, where one or more classes enforce
  proper usage by encapsulating functions and data within a type-safe
  OO interface, rather than a flat C function API.  For example, there
  are separate wrapper facade classes for passively listening for TCP
  connections, actively initiating TCP connections, and transferring
  data over the resulting TCP streams.  This layer also offers portable
  interfaces to capabilities whose implementations differ widely across
  platforms, such as process management.

  The ACE wrapper facade layer resolves many of the portability issues
  described in Section 2.  The vast majority of portable functionality
  is available with the same classes across all supported platforms.
  Since ACE's porting efforts are responsible for resolving any OS
  differences, you needn't worry about them.  With all of the details
  buried in ACE, your application code remains clean and neat on all
  platforms.  Since ACE offers all functionality via intelligently
  designed class interfaces, your code doesn't suffer from either
  hard-to-find semantic differences across platforms or type safety
  problems that can otherwise plague you at run time.  The result is
  that your code is developed more quickly, and the porting effort is
  much shorter than it would otherwise be.

  As we've seen, ACE's OS adaptation and wrapper facade layers offer
  enormous benefits over trying to code directly to native OS APIs.
  However, ACE offers many more benefits than object-oriented
  portability.  These benefits are derived from the highest level of
  abstraction in ACE: the framework layer described next.

. ACE framework layer.  A framework is a set of classes that
  collaborate to provide a reusable architecture for a family of
  related applications.  The ACE framework layer codifies the
  interaction of the ACE wrapper facades to offer capabilities such as
  event demultiplexing and dispatching, asynchronous I/O handling, and
  event logging.  Although implementing these frameworks efficiently
  often necessitates the use of platform-specific details, ACE offers
  them as portably as possible via its systematic use of patterns
  [POSA2].
  
  ACE's frameworks also offer a set of "semi-complete" applications,
  such as the Acceptor-Connector framework that simplifies the active
  and passive establishment of network service sessions, and the
  Reactor framework for handling and dispatching events from multiple
  sources.  Rather than recreating these core areas of functionality
  from scratch, developers using these frameworks need only add
  business-specific logic at well-defined locations to yield working
  solutions (the sketch after this list gives a flavor of Reactor-based
  code).  Beyond networked applications, ACE's frameworks have also
  been used to develop even higher levels of standards-based
  middleware, such as the JAWS web server and The ACE ORB (TAO) [TAO].
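
To give a flavor of the framework layer, here is a minimal sketch of a
Reactor-based event handler that echoes data arriving on a connected
TCP stream.  It's our illustration written against the standard
ACE_Event_Handler, ACE_Reactor, and ACE_SOCK_Stream classes;
connection establishment and error handling are elided:

    // A minimal sketch of an ACE Reactor-based service.  The Reactor
    // framework detects activity on the handle and dispatches
    // handle_input(); the ACE_SOCK_Stream wrapper facade supplies
    // type-safe socket I/O.
    #include "ace/Event_Handler.h"
    #include "ace/Reactor.h"
    #include "ace/SOCK_Stream.h"

    class Echo_Handler : public ACE_Event_Handler
    {
    public:
      Echo_Handler (ACE_SOCK_Stream &stream) : stream_ (stream) {}

      // The Reactor waits for input events on this handle.
      virtual ACE_HANDLE get_handle (void) const
      { return this->stream_.get_handle (); }

      // Called back by the Reactor when data arrives.
      virtual int handle_input (ACE_HANDLE)
      {
        char buf[512];
        ssize_t n = this->stream_.recv (buf, sizeof buf);
        if (n <= 0)
          return -1;       // Ask the Reactor to remove this handler.
        return this->stream_.send_n (buf, n) == n ? 0 : -1;
      }

    private:
      ACE_SOCK_Stream &stream_;
    };

    // Registration and the event loop (connection setup elided):
    //
    //   Echo_Handler handler (stream);
    //   ACE_Reactor::instance ()->register_handler
    //     (&handler, ACE_Event_Handler::READ_MASK);
    //   ACE_Reactor::instance ()->run_reactor_event_loop ();

Note that the business logic lives entirely in handle_input(); event
detection, demultiplexing, and dispatching are supplied by the
framework.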

Together, the ACE middleware layers described above simplify the
creation, composition, configuration, and porting of networked
applications without incurring significant performance overhead.

5. The Importance of Open-Source

Open-source development processes have emerged as an effective
approach to reduce cycle time and decrease design, implementation, and
quality assurance costs for certain types of software, particularly
systems infrastructure software, such as operating systems, compilers
and language processing tools, editors, and middleware such as ACE,
JAWS, and TAO.  This section describes the reasons why successful
open-source development projects work, from both an end user and
software process perspective.  We base this discussion on decades of
experience devising, employing, and researching open-source
development processes and middleware toolkits.

From an end user perspective, successful open-source projects work for
the following reasons.

. Reduced software acquisition costs.  Open-source software is often
  distributed without development or runtime license fees, though many
  open-source companies do charge for technical support.  This pricing
  model is particularly attractive to application developers working
  in highly commoditized markets where profits are driven to marginal
  cost.  Moreover, open-source projects typically use low-cost,
  widely-accessible distribution channels, such as the Internet, so
  that users can access source code, examples, regression tests, and
  development information cheaply and rapidly.

. Enhanced diversity and scale.  Well-written and well-configured
  open-source software can be ported easily to a variety of
  heterogeneous operating system and compiler platforms.  In addition,
  since the source is available, end users have the freedom to modify
  and adapt their source base readily to fix bugs quickly or to
  respond to new market opportunities with greater agility.  Indeed,
  many of the ACE ports originated in the ACE user community rather
  than its core development group.  Due to ACE's open-source model,
  therefore, its range of platforms expanded rapidly.

. Simplified collaboration.  Open-source promotes the sharing of
  programs and ideas among members of technology communities that have
  similar needs, but who also may have diverse technology
  acquisition/funding strategies.  This cross fertilization can lead
  to new insights and breakthroughs that would not have occurred as
  readily without these collaborations.  For example, due to input and
  code contributions from the ACE community, ACE's logging service
  component has evolved from a self-contained client/server
  arrangement into one that can take advantage of both UNIX syslog and
  the Windows Event Log for better enterprise integration.

From a software process perspective, successful open-source projects
work for the following reasons: 

. Scalable division of labor.  Open-source projects work by exploiting
  a loophole in "Brooks's Law," which states that adding developers to a
  late project makes it later.  The logic underlying this law is that
  software development productivity generally doesn't scale up as the
  number of developers increases.  The culprit is the rapid increase
  in human communication and coordination costs as project size grows.
  A team of ~10 good developers can therefore often produce much
  higher quality software with less effort and expense than a team of
  ~1,000 developers.

  In contrast, software debugging and QA productivity does scale up as
  the number of developers helping to debug the software increases.
  The main reason for this is that, all other things being equal,
  having more people test the code will identify the "error-legs"
  much more quickly than having fewer testers.  A team of 1,000
  testers will therefore usually find many more bugs than a team of 10
  testers.  QA activities also scale better since they don't require
  as much inter-personal communication compared with software
  development activities, particularly analysis and design activities.

. Short feedback loops. One reason for the success of well-organized
  open-source development efforts, such as Linux or ACE, is the short
  feedback loops between the core developers and the users.  In
  successful open-source projects, for instance, it is often only a
  matter of minutes or hours from the point at which a bug is reported
  from the periphery to the point at which an official patch is
  supplied from the core to fix it.  Moreover, the use of powerful
  Internet-enabled configuration management tools, such as the
  Concurrent Versions System (CVS), allows open-source users to
  synchronize in real time with updates supplied by the core
  developers.

. Effective leverage of user community expertise and computing
  resources.  In today's time-to-market-driven economy, fewer software
  providers can afford long QA cycles.  As a result, nearly everyone
  who uses a computer--particularly software application
  developers--is a beta tester of software that was shipped before all
  its defects were removed.  In traditional closed-source/binary-only
  software deployment models, premature release cycles yield
  frustrated users, who have little recourse when problems arise with
  software they purchased from vendors and thus little incentive to
  help improve closed-source products.  In contrast, open-source
  development processes help to leverage expertise in their
  communities, thereby allowing core developers and users to
  collaborate to improve software quality.

  For example, the short feedback loops mentioned in the previous
  bullet encourage users to help with the QA process since they are
  "rewarded" by rapid fixes after bugs are identified.  Moreover,
  since the source code is open for inspection, when users do
  encounter bugs they can often either fix them directly or can
  provide concise test cases that allow the core developers to isolate
  problems quickly.  User efforts therefore greatly magnify the
  debugging and computing resources available to an open-source
  project, which can improve software quality if harnessed
  effectively.

. Inverted stratification.  In many organizations, testers are
  perceived to have less status than software developers.  The
  open-source development model inverts this stratification in many
  cases so that the "testers" in the periphery are often excellent
  software application developers who use their considerable debugging
  skills when they encounter occasional problems with the open-source
  software base.  The open-source model makes it possible to leverage
  the talents of these gifted developers, who often would not be
  satisfied with playing the role of a tester in traditional software
  organizations.

In general, traditional closed-source software development and QA
processes rarely achieve the benefits outlined above as rapidly or as
cheaply as open-source processes.

6. Concluding Remarks

Today's business climate requires software developers to move
applications nimbly among a wide variety of hardware and
software platforms.  The standards developed by both vendors and
industry groups as a means to ease the burden of porting software
often present a myriad of conflicting and divergent facilities and
APIs, making it unnecessarily hard to develop portable software.
Today's developers cannot afford to be tied up in such a mess.

Host infrastructure middleware is an emerging set of technologies that
helps address many of the portability challenges of OS APIs, making it
possible to develop efficient, yet reusable and retargetable,
networked applications more quickly and easily.  The ACE toolkit
provides a layered
set of classes and frameworks based on patterns that alleviate the
problems of developing portable software across a wide range of
platforms.  

ACE has changed the way complex networked applications and middleware
are being designed and implemented on the world's most popular
operating systems, such as AIX, HP-UX, Linux, MacOS X, Solaris, and
Windows, as well as real-time and embedded operating systems, such as
ChorusOS, LynxOS, pSoS, QNX, VxWorks, and WinCE.  ACE is being used by
thousands of development teams, ranging from large Fortune 500
companies to small startups.  Its open-source development model and
self-supporting user community culture are similar in spirit and
enthusiasm to those driving Linus Torvalds's popular Linux OS.

ACE can be downloaded from http://www.cs.wustl.edu/~schmidt/ACE.html.

References

[ACE] D. Schmidt and S. Huston, C++ Network Programming: Mastering
  Complexity with ACE and Patterns, Addison-Wesley, 2002.
  http://www.cs.wustl.edu/~schmidt/ACE/

[POSA2] D. Schmidt, M. Stal, H. Rohnert, and F. Buschmann,
  Pattern-Oriented Software Architecture: Patterns for Concurrent and
  Networked Objects, Wiley and Sons, 2000.
  http://www.cs.wustl.edu/~schmidt/POSA/

[TAO] D. Schmidt, D. Levine, S. Mungee, "The Design and Performance of
  the TAO Real-Time Object Request Broker", Computer Communications
  Special Issue on Building Quality of Service into Distributed
  Systems, 21(4), 1998.  http://www.cs.wustl.edu/~schmidt/TAO.html


