CONUS). CONUS or forward operating base personnel can initiate video streams and live
♦ Fusion of data from large numbers of sensors
♦ Large-scale target identification and tracking
♦ Large-scale video acquisition, transmission, analysis, and directing of this
information to appropriate command and control entities
♦ Remote command and control of robotic surface and UAV resources
♦ Rapid insertion of overwhelming force
A key component of the mission is the transmission of critical, sensitive information over
reliable, secure networks. The networks need to be rapidly deployable and configurable to
support command and control as well as tactical operations.
2.1.2 Zero-Casualty War Networking Research Needs
The discussion of this scenario identified the need for research in:
Scalable networking for large numbers of low-data-rate nodes
Future combat systems will have thousands to millions of nodes with very low
throughput, high delays, and high redundancy. Networking needs to scale to large numbers
of nodes with very low data rates by tolerating redundant nodes, highly suboptimal routes,
losses, and errors, and by adapting to change. Research is
needed on programming techniques for large-scale redundancy-based computing paradigms.
Network self-organization, automated configuration, reconfiguration, and management
Dynamic configuration and reconfiguration of networks are needed to support rapid initial
deployment of sensors and their networks, changing conditions and locations, and mobile
elements with asset tracking and handoffs. To support these capabilities a wide range of
information is needed such as topography, sensor location, and user requirements. It also
requires network capabilities such as:
♦ Tool sets for network design and deployment
♦ Performance measurement throughout the network
♦ Network discovery of applications and their requirements
♦ Self-diagnosing, self-healing capacity
Research is needed to automatically generate, propagate, and maintain the optimal
communications, network, and application configurations required to rapidly establish and
maintain mobile ad hoc tactical networks. To support crisis or conflict situations, network
resources should be deployable and configurable in an operational state in the time required
to physically transport those resources to their destination. These situations also would
benefit from an ability to establish virtual configurations of networking assets that may
include mobile field nodes and fixed end user sites. This capability must support decisions
about frequency assignment (optimizing spatial reuse), application location, and network
addressing.
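As a purely illustrative sketch of the frequency assignment problem, the following Python fragment (node names, interference relationships, and channel numbers are all invented for this example) greedily assigns channels so that nodes within interference range never share one, which is the essence of optimizing spatial reuse.

    # Minimal sketch of greedy channel assignment for spatial reuse.
    # Assumption: an interference graph is given, where an edge means two
    # radios are close enough to interfere and so must use different channels.

    def assign_channels(interference, channels):
        """Greedy graph coloring: nodes that interfere get different channels."""
        assignment = {}
        # Visit the most-constrained nodes first (highest interference degree).
        for node in sorted(interference, key=lambda n: -len(interference[n])):
            used = {assignment[nbr] for nbr in interference[node] if nbr in assignment}
            free = [ch for ch in channels if ch not in used]
            if not free:
                raise ValueError(f"no free channel for {node}; add channels or reduce power")
            assignment[node] = free[0]
        return assignment

    if __name__ == "__main__":
        # Hypothetical five-node field network; edges denote mutual interference.
        interference = {
            "relay1": {"sensorA", "sensorB", "relay2"},
            "relay2": {"relay1", "sensorC"},
            "sensorA": {"relay1", "sensorB"},
            "sensorB": {"relay1", "sensorA"},
            "sensorC": {"relay2"},
        }
        print(assign_channels(interference, channels=[1, 6, 11]))

A fielded system would derive the interference graph from topography, antenna patterns, and measured signal strength rather than from a static table.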
Self-organizing networks have the potential to reduce the large manpower requirements to
set up and configure networks. Research is needed to reduce the number of networking
infrastructure personnel required to establish, operate, and maintain networks from 20
percent of a rapid insertion force to no more than 1 percent of the force. This will require
networking capabilities such as:
♦ Automated diagnosis and fault isolation of mobile ad hoc networks
♦ Non-destructive automated network reconfiguration mechanisms to maintain system
integrity and performance
♦ Mechanisms for network evolution, including interface definition and standardization
♦ Automated mechanisms for the diagnosis and correction of problems in mobile ad
hoc networks
Hierarchical networking
CONUS, forward operating bases, Task Force commanders, and FFCS cell team leaders
have access to common views of the tactical situation, but typically need different
networking and aggregation to operate at different levels of the hierarchy.
Seamless, transparent service across heterogeneous elements
In a dynamic ad-hoc environment, networking will rely on heterogeneous technologies
(wireless, satellite, land line) that must seamlessly and transparently work together to support
the end users. Network-to-network interfaces must be interoperable. Standards are needed
to support seamless and transparent service.
End-to-end performance
Applications, networking, and services must cooperate to satisfy the end-to-end needs of
the user in a seamless, transparent, cost-effective, trustworthy, and timely manner. The
network must be able to adapt to mobile and ad-hoc sensors and nodes, accommodate in-situ
sensors and nodes, and provide access to widely distributed computational resources (for
example processing, modeling, and data resources). A knowledge-based, rule-driven tool is
needed to tailor sensor performance to specific mission requirements and to tune the sensor
array for deployment patterns, transmission frequencies, and power levels.
The networks need to support fusion of sensor data and to provide information tailored to
end users requiring information at different levels of granularity – e.g., data covering a corps
level or a battalion level. Sensor data may be aggregated in the field to minimize data
transmission if the results will still meet end user requirements.
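A minimal sketch of such in-field aggregation, assuming the end user only needs one averaged value per time window (the readings and window size below are hypothetical):

    # Sketch: aggregate raw sensor samples in the field before transmission,
    # so that only the granularity the end user needs crosses the network.

    def aggregate(samples, window_seconds):
        """Group (timestamp, value) samples into time windows and average them."""
        windows = {}
        for timestamp, value in samples:
            key = int(timestamp // window_seconds)
            windows.setdefault(key, []).append(value)
        return [
            {"window_start": key * window_seconds,
             "mean": sum(values) / len(values),
             "count": len(values)}
            for key, values in sorted(windows.items())
        ]

    if __name__ == "__main__":
        raw = [(0.5, 21.0), (3.2, 21.4), (7.1, 22.0), (12.8, 22.3)]  # hypothetical readings
        print(aggregate(raw, window_seconds=5))  # three summaries instead of four samples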
End-to-end performance measurement is critical for tuning performance and for establishing
trust.
Power management
Mobile networks will rely on finite power sources, usually batteries. It is critical that
mobile network elements accurately measure and effectively use the power needed for
sensing, processing, transmitting, and receiving information.
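A back-of-the-envelope energy budget shows why this matters. All of the energy figures in the sketch below are illustrative assumptions rather than measured values; the point is simply that transmission typically dominates, so transmitting fewer, aggregated readings extends node lifetime.

    # Sketch: rough battery-lifetime estimate for a sensor node.
    # All energy figures below are illustrative assumptions, not measurements.

    BATTERY_J = 2 * 3600 * 3.0   # roughly a 2 Ah battery at 3 V, i.e. 21,600 joules
    E_SENSE_J = 0.0005           # assumed energy per sensor reading
    E_CPU_J = 0.0002             # assumed energy to process one reading
    E_TX_J = 0.02                # assumed energy to transmit one reading

    def lifetime_days(readings_per_hour, tx_fraction):
        """Estimate lifetime when only a fraction of readings is transmitted."""
        per_reading = E_SENSE_J + E_CPU_J + tx_fraction * E_TX_J
        joules_per_day = readings_per_hour * 24 * per_reading
        return BATTERY_J / joules_per_day

    if __name__ == "__main__":
        print(f"transmit every reading  : {lifetime_days(60, 1.0):6.0f} days")
        print(f"transmit 1 reading in 10: {lifetime_days(60, 0.1):6.0f} days")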
Trust: security, assurance, and reliability
Functions such as telemedicine, weapons fire control, voice, and image transfers require
high levels of end user trust. This trust will depend on end-to-end system reliability, security,
responsiveness, and predictable performance. System responsiveness relies on channel access
methods and end-to-end route generation. Predictable performance will require system and
network redundancy and fault tolerance. Information must meet user and application
requirements for trust in terms of throughput, timeliness, fidelity, assurance, reliability,
latency, location (e.g., for weapons fire control), error rates, and other factors. QoS may address
some of these factors.
Security must be provided throughout the system since each sensor and network node is
subject to compromise. Differing levels of end user trust may accrue to different network
paths, data aggregation from different sensors, cross-correlation of sensor data, and other
system characteristics. Research is needed on decreasing information uncertainty through
configuration and management of sensor and networking resources.
End user trust is dependent on establishing a common operational view and QoS. To
achieve this, research is needed on:
♦ Techniques for capturing minimum mission requirements
♦ Adaptive middleware to map application-level requirements into network-level QoS
mechanisms
♦ Network-level mechanisms (QoS and techniques) for resolving conflicting needs
Multimedia
Multimedia technologies will accommodate voice, data, video, and still images.
Revisiting networking fundamentals
Future systems will have to seamlessly integrate components with a dynamic range many
orders of magnitude larger than today’s networks (with speeds commonly ranging from
hundreds of gigabits per second to a few kilobits per second) in a changing environment.
Research is needed to revisit the network protocol stack to determine what types of control
information are needed at each layer (including the application layer) to allow the other
layers to effectively adapt to rapidly changing network conditions. Intermediate steps in this
research include identifying characteristics of the potential links, interfaces, and component
networks, and developing a control plane application/platform interface.
2.2 Deeply Networked World/SWARMS (Smart World Air Force Repair and Maintenance System)
In the future, networking and networked devices will be broadly and deeply deployed to
make possible the truly smart world in which intelligent agents query, collate, and manage
systems. An agent model will be developed and deployed, in which agents function correctly
as individuals and collaborate with each other effectively in order to make higher-level
decisions than any individual agent might make. Both individually and in groups, agents will
provide higher-level, composite functions responsive to societal policy constraints that may
change or evolve over time. Humans will not be responsible for managing the large numbers
and heterogeneity of devices in such a deeply networked world. The network will be self-
organizing and self-healing. This will require the ability to measure and evaluate its behavior,
and either mask or correct problems when they arise.
2.2.1 Scenario Description
In the future, the Air Force has implemented an architecture called the Smart World
Air Force Repair and Maintenance System, or SWARMS, where every repair component and
parts depot is “smart.” SWARMS predicts when and where specific repairs will be needed
and, at the level of the whole aircraft, understands flight schedules and uses this information
to plan where and when work should be done. SWARMS informs the global inventory
system, which in turn makes sure that by the time a plane arrives at a destination the
appropriate parts are there, with enough information that the repair can be made.
In SWARMS, a part knows where it is. A depot knows how many of which kinds of parts
it has and has a model of what is needed based on reports about the schedule of arriving
planes. Every part, when installed in an aircraft, is also introspective. Each one knows how
well it is functioning and can predict when it will need to be repaired or replaced. The
composite systems not only integrate over all their parts, but also have a higher level
understanding of the emergent system to support mission planning.
In this scenario, there are critical issues relating to trust, assurance, and security, including
privacy, authenticity, authorization, and denial of service. The system must maintain security
to prevent exploitation by the enemy. For example, SWARMS data are of great use for
espionage and sabotage by the opposing force, and maintenance schedules and procedures
are critical to the safety of the aircraft.
2.2.2 SWARMS Networking and Networking Research Needs
The SWARMS discussion identified key networking elements of the scenario, including:
♦ Smart devices that monitor their location, query function and status, and identify
situations requiring attention
♦ Sensors and systems considered at different levels of aggregation – e.g., as individual
devices or as components of larger systems
♦ Automated functions implemented to evaluate conditions and to control behavior
♦ Multiple simultaneous agents acting collectively
The SWARMS discussion identified the research needed to support these elements,
including:
Trustworthiness of complex self-organized networks
The end user must know that the end-to-end system can be trusted to meet requirements.
Characteristics of a complex system contributing to end user trust in the system include
reliability, robustness, and security. Technical capabilities for implementing these
characteristics include digital signatures, authentication, authorization, path quality,
information source, and quality of the information. The trustworthiness of a complex system
is a function of the trustworthiness of its components and how they are integrated. It may
change as sensors, networks, and other resources change over time. Since some network
paths may be more trustworthy than others, the algorithms chosen to organize, select, and
establish network paths contribute to the trustworthiness of the system. Research is needed
to develop these algorithms.
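One possible formulation, not a prescribed algorithm: if each link carries an estimated probability that it has not been compromised and links are treated as independent, the trust of a path is the product of its link trusts, and the most trustworthy path can be found by a shortest-path search on the negative logarithm of trust. The Python sketch below uses an invented topology and invented trust values.

    # Sketch: choose the most trustworthy end-to-end path.
    # Assumption: each link carries an independent trust score in (0, 1]; path
    # trust is the product of link trusts, so maximizing trust is equivalent to
    # minimizing the sum of -log(trust), an ordinary shortest-path problem.
    import heapq
    import math

    def most_trusted_path(links, src, dst):
        """links: {node: {neighbor: trust}}. Returns (path_trust, path)."""
        best = {src: 0.0}                     # node -> lowest cost found so far
        queue = [(0.0, src, [src])]
        while queue:
            cost, node, path = heapq.heappop(queue)
            if node == dst:
                return math.exp(-cost), path
            if cost > best.get(node, math.inf):
                continue
            for nbr, trust in links.get(node, {}).items():
                new_cost = cost - math.log(trust)
                if new_cost < best.get(nbr, math.inf):
                    best[nbr] = new_cost
                    heapq.heappush(queue, (new_cost, nbr, path + [nbr]))
        return 0.0, []                        # no path found

    if __name__ == "__main__":
        links = {  # hypothetical per-link trust annotations
            "sensor": {"relayA": 0.99, "relayB": 0.80},
            "relayA": {"gateway": 0.95},
            "relayB": {"gateway": 0.99},
        }
        print(most_trusted_path(links, "sensor", "gateway"))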
Sensor data may have different “value” to an end user depending on the trust associated
with the specific sensors that produced the data. Distributed sensor design characteristics,
such as reliability and communications mechanisms, contribute to the end user trust in a
sensor and its data. These characteristics are a consideration in the design and cost of
producing the sensors. Trust will also be affected by the sensors chosen and the data paths
used to obtain data from these sensors.
Adaptive distributed systems
Adaptivity may enable a greater functional range for a distributed system. In a
bandwidth- and sensor-limited environment, the system can adapt the sensors chosen and the
data they transmit to produce information tailored to specific levels of the decision and
operational hierarchy. Several alternatives for adaptation exist:
♦ For a simple network, the application may adapt
♦ For a simple smart network, the applications may adapt based on network-provided
information, including initial information and operational feedback
♦ For a complex highly controllable network, the network may adapt based on
information provided by applications
♦ For a complex implicitly adaptive network, the applications run and the network
adapts
Adaptive networking depends on performance measurement and evaluation. Tools are
needed for development, implementation, evaluation, and use of adaptation algorithms.
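As a sketch of the second alternative above (an application adapting on the basis of network-provided information), the hypothetical Python fragment below steps a video sending rate up or down one rung at a time in response to reported available bandwidth; the rate ladder, headroom factor, and feedback values are assumptions made for illustration.

    # Sketch: an application that adapts its video rate to network feedback.
    # Assumption: a measurement service periodically reports available
    # bandwidth in bits per second; the rate ladder and margin are illustrative.

    RATES_BPS = [250_000, 500_000, 1_000_000, 2_500_000, 5_000_000]

    def choose_rate(available_bps, current_bps, headroom=0.8):
        """Pick the highest rate that fits within a safety margin of the report,
        moving only one rung at a time to avoid oscillation."""
        target = max((r for r in RATES_BPS if r <= headroom * available_bps),
                     default=RATES_BPS[0])
        idx = RATES_BPS.index(current_bps)
        if target > current_bps:
            return RATES_BPS[min(idx + 1, len(RATES_BPS) - 1)]  # step up one rung
        if target < current_bps:
            return RATES_BPS[max(idx - 1, 0)]                   # step down one rung
        return current_bps

    if __name__ == "__main__":
        rate = 1_000_000
        for measured in (4_000_000, 6_000_000, 800_000, 300_000):  # hypothetical feedback
            rate = choose_rate(measured, rate)
            print(f"network reports {measured:>9} b/s -> send at {rate:>9} b/s")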
Scalability and self-organizing communications algorithms
We expect orders of magnitude increases in both the number of networked devices and
network traffic on the future Internet. It is critical that the network scale to accommodate
those increases. Research is needed to understand network behavior with these increases and
to study networked systems’ complexity. Research is needed to adapt relevant science from
other fields such as chaos theory, economics, catastrophe theory, stochastic processes, and
generalized control theory to promote breakthroughs and revolutionary solutions to
scalability. Network performance measurement is critical to providing information on the
functioning of the network to guide real-time network management, and to provide an
understanding of network behavior to support design of the future Internet. Network
performance measurement tools need to be developed, standardized, and ubiquitously
deployed to provide performance data. A performance data archive is needed to provide an
historical record for understanding operational network behavior, complexity, and trends and
to support network simulation and design.
Currently, network scalability is implemented using hierarchical and cluster network
organization. However, such organization is difficult to implement for mobile network
elements, for responding to dynamic conditions, and for responding to administrative
constraints. These require that self-organizing networks be able to continually change the
network organization.
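A minimal sketch of such self-organization, assuming known node positions, a fixed radio range, and a simple lowest-ID election rule (all choices made for brevity, not recommendations): uncovered nodes elect themselves cluster heads, and their uncovered neighbors join them. A mobile network would re-run such an election as conditions change, which is exactly the continual reorganization described above.

    # Sketch: one round of self-organized clustering using the classic
    # "lowest ID" heuristic. Assumptions: node positions and radio range are
    # known and static; a real network would re-run this as conditions change.
    import math

    def neighbors(positions, radio_range):
        """Nodes within radio range of each other can hear one another."""
        near = {n: set() for n in positions}
        for a, (ax, ay) in positions.items():
            for b, (bx, by) in positions.items():
                if a != b and math.hypot(ax - bx, ay - by) <= radio_range:
                    near[a].add(b)
        return near

    def form_clusters(positions, radio_range):
        """Return {cluster_head: [members]}."""
        near = neighbors(positions, radio_range)
        assignment = {}
        for node in sorted(positions):            # lowest ID gets priority
            if node in assignment:
                continue
            assignment[node] = node               # uncovered node becomes a head
            for nbr in near[node]:
                assignment.setdefault(nbr, node)  # uncovered neighbors join it
        clusters = {}
        for member, head in assignment.items():
            clusters.setdefault(head, []).append(member)
        return clusters

    if __name__ == "__main__":
        # Hypothetical node IDs and (x, y) positions in meters.
        positions = {1: (0, 0), 2: (5, 0), 3: (40, 0), 4: (44, 3), 5: (80, 0)}
        print(form_clusters(positions, radio_range=10))  # {1: [1, 2], 3: [3, 4], 5: [5]}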
Research is needed to identify core networking functions and parameters, to develop
algorithms that will enable highly flexible multimodal routing to support scaling and QoS,
and to implement more flexible addressing schemes to accommodate emerging optical
technologies. Routers need broader semantics for topology, name, attributes, and coordinates
(grid location, hierarchy, etc.), and scaling for orders of magnitude expansion in numbers of
networked devices and network traffic. Network simulation tools are needed to determine
performance limits of a network to anticipate problems before they occur.
2.3 Crisis Management
A crisis management system has been developed that enables crisis response teams to
respond with critical resources where needed in real time. It uses deployable and in-situ
sensors that configure themselves to provide real-time monitoring of the environment.
Networking supports reliable, dynamically configurable, and highly secure communications
to enable real-time delivery of information to distributed decision-makers and real-time
information development using remote, on-line resources.
2.3.1 Scenario Description
In 2015, “perfect” conditions exist for multiple fires in the U.S. The DoD has deployed a
“staring” missile launch detection satellite system approved for military and civilian use.
NASA has orbited Firesat, capable of providing twice-a-day high-resolution data over
multiple spectral bands (for higher informational content), and multi-instrument views of
forest fire activity around the world.
A large collection of wireless devices with embedded chips has been deployed in cellular
phones. They support “spotcasting,” ad hoc communications, a sensor mode, and general
purpose programming and processing capabilities. Optical fiber is extensively deployed at
the core of the network. A rich assortment of sensors is deployed in homes, commercial
buildings, public infrastructure, and the natural environment. Computation, communications,
and sensor resources are widely available over the Internet.
On the day of crisis, DoD reports dozens of new fires from a single highly charged
lightning storm. By mid-day, the early morning Firesat images and data have been processed
and disseminated to hundreds of state and Federal agencies. Hot-spot data are combined with
vegetation cover and dryness models to produce detailed next-24-hour maps for the worst-hit
regions. These maps and digital models are disseminated instantly to government command
and control centers and are spotcast to the individual homes and businesses most in danger.
Department of Interior and other Federal agency supercomputing systems are organized
on-line to model the existing situation and begin “nowcasting” the predicted tracks of the
worst fires. The models and nowcasts are transmitted by satellite communications to the
forest fire field units, which return validation and update information. This field information,
along with real-time atmospheric, chemical, and other environmental data from sensors
deployed throughout the area – both in-situ microsensor platforms deployed in advance as
well as self-contained, self-powered sensors dropped from aircraft that same morning – are
continuously integrated into the nowcast models. Customized warning and evacuation
messages are automatically provided to all the homes and businesses in the area.
Emergency mobilization forces are directed by computer into the affected area. They
establish a high performance field network instantly capable of local area and remote
communications, using truck-based wireless technologies tied into regional networks via
high performance satellite communications.
Telemedical facilities are established to attend to fire victims in the worst-hit areas.
Mobile whole-body scanners, sophisticated medical instruments, and mini chemical analysis
labs are plugged into the network. This allows deep resources of medical specialists, data and
information resources, and analysis facilities to support on-site paramedics in real time.
Command and control units have real-time high performance network access to all needed
statewide and Federal resources.
2.3.2 Disaster Scenario Networking and Networking Research Needs
The disaster scenario discussion identified networking challenges including:
♦ Sensornet: An ad hoc network of sensors configured for and attached to the existing
infrastructure, with high-bandwidth connections (e.g., gigabit-class satellite links) to
reach rural areas
♦ Heterogeneous environment of sensors, networking capabilities, and administrative
structures
♦ Dynamic environments and changing user requirements providing a need for new
network management and visualization tools and automatic reconfiguration,
management, and control
♦ Technology reuse: Using surviving resources for purposes other than the primary
purpose they were designed for
♦ Data resources: Satellite sensors and deployed video sensors that produce data at the
rate of hundreds of megabytes per second. These data are used in modeling and by
command centers. Rapidly changing loads place emphasis on QoS based on media
type (sensor data, voice, video) and user.
♦ Real-time modeling: Significant distributed computational and communications
resources to support nowcasting
The disaster scenario discussion identified research needed to meet these challenges,
including:
Interoperability
Organized sensors and networks will have to operate seamlessly with the existing
infrastructure and with each other to overcome existing incompatibilities, routing
mismatches, and security mismatches between different providers.
Robustness and dynamic reconfiguration
The infrastructure must be designed to cope with a wide variety of faults and dynamically
changing resources by providing redundant resources and paths and the ability to actively
reconfigure. Redundant technologies should be used so that their failure modes are as
distinct as possible to decrease the probability of system failure.
Reuse of technologies
Reuse of wireless devices (including routing, spotcasting, ad hoc communication,
sensing, and application software download) could help ensure that local resources are
available during a disaster response. Reuse could also support functions needed to transform
from short-term crisis management to longer-term emergency response.
Self-organizing, self-healing networks
Self-organizing, self-healing networks will expedite the organization of remaining and
newly deployed sensors and technologies to establish routes and to connect to the existing
infrastructure with minimal human intervention. The involvement and coordination of
government agencies, companies, and individuals may require establishing a temporary
administrative domain including components from the different organizations.
Dynamic, adaptive, time-varying QoS
In a crisis response, bandwidth resources may not match the workload and workloads may
vary significantly over time and space. For example, time criticality and video quality
requirements may vary depending on whether the video supports telemedicine or media reporting.
Thus, mechanisms are needed to deliver QoS within an ad hoc network that are appropriate
to the application and network technology.
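One way such a mechanism could look is a simple mapping from application requirements to network service classes. The class names, latency and loss thresholds, and DSCP-style code points in the Python sketch below are invented for illustration; an operational system would negotiate these with the network rather than hard-code them.

    # Sketch: map application-level requirements onto network service classes.
    # The class names, thresholds, and code points below are illustrative only.

    CLASSES = [
        # (name, guaranteed max latency in ms, max loss rate, DSCP-style code point)
        ("telemedicine-video", 150, 0.001, 46),
        ("interactive-voice",  150, 0.010, 34),
        ("media-reporting",    400, 0.020, 26),
        ("bulk-sensor-data",  5000, 0.050,  0),
    ]

    def select_class(required_latency_ms, tolerable_loss):
        """Pick the least demanding class that still meets the application's needs."""
        for name, latency, loss, codepoint in reversed(CLASSES):  # cheapest first
            if latency <= required_latency_ms and loss <= tolerable_loss:
                return name, codepoint
        name, _, _, codepoint = CLASSES[0]    # nothing cheaper is adequate
        return name, codepoint

    if __name__ == "__main__":
        print(select_class(required_latency_ms=200, tolerable_loss=0.01))     # voice-like
        print(select_class(required_latency_ms=10_000, tolerable_loss=0.05))  # bulk data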
Discovering resources and their location
Establishing an ad hoc infrastructure for disaster response requires resource discovery
such as identifying and locating available links and their capacities; information,
computational, and other resources; and QoS capabilities to support priority information
distribution and delivery of telemedical resources.
Trust: security, privacy, and reliability
Issues of trust, encompassing security, privacy, and reliability, pervade the disaster
scenario. The disaster response resources must provide differing levels of security,
assurance, and reliability based on the needs of the end users and their applications such as
medical data transmission and patient privacy over heterogeneous, ad hoc networks and
devices. Research needs to address:
♦ Heterogeneity of parties involved: A major disaster will involve many government
agencies (local, state, and Federal), companies, and individuals. Disaster response
networks must be responsive to their diverse security and trust policies, which may be
mutually incompatible and hinder the sharing of data and other resources. This issue can
be further complicated if other sovereign nations are involved.
♦ Flexibility: Disaster responses may require temporary flexible modification or
violation of security and trust policies. For example, an emergency medical team may
need to access patient records for which it ordinarily would not have authorization.
♦ Reuse of technologies: Technologies may be designed so they can perform actions in
crises that are not their primary functions. They also need to be designed so that their
crisis-response capabilities cannot be exploited by third parties during normal operation.
♦ False alarms: Research should be conducted on detecting a false alarm raised by an
intruder and on identifying that intruder.
Network visualization and network management
Current network visualization and management tools are not able to handle the ad hoc
heterogeneous networks needed for disaster response. New network monitoring and
measurement tools are needed to support visualization and management.
Spectrum conflicts
Spectrum conflicts that arise whenever different technologies (for example, Medium
Access Control (MAC) protocols and cellular standards) share the same portion of the
spectrum will need to be overcome.
Metrics and performance
Metrics are needed to measure the time to set up a network and the amounts of traffic
supported at different levels of QoS. Simulation and analysis tools are needed to deal with
time dependent response problems and networks with many orders of magnitude difference
in speeds from one part of the network to another. Solutions to the time dependent response
problems should be evaluated in multiple ways including simulation using benchmarks. In
addition, training exercises are needed to stress and test different solutions.
2.4 Collaboration
Our society increasingly relies on geographically distributed collaborations for human
interactions in business, science, the arts, and other areas, both nationally and internationally.
These collaborations improve communication among individuals with a common purpose;
promote sharing, development, and dissemination of information; and foster interdisciplinary
interactions.
The Internet supports distributed collaboration teams in which collaborators at multiple
sites can interact visually and verbally, augmented by additional tools and services such as
virtual reality and immersive environments. Distributed collaborations increasingly require
realistic, “natural” interactions supported not only by high bandwidth but also by a wide
array of enhancing services to provide ease of use, completeness of information, and
appropriate levels of trust and assurance. Collaboration tools assist these groups in
performing complex tasks, such as providing multimodal access to remote sites of scientific
interest and supporting coordination to overcome problems and failures as they occur.
2.4.1 Collaboration Scenario
Current-generation network collaboration tools have not been widely used because they
support limited exchange of information, provide limited “visibility” of remote collaborators,
and are often difficult to set up and maintain. To be more widely accepted and used,
collaboration environments (collaboratories) will need to provide automated setup and
enhanced exchange of information and visualization. In addition, collaboration services
provided over a network will need to be active, adapt to end users' personal preferences and
work patterns, provide interoperable services over heterogeneous applications, and support
multimedia interactions, whiteboarding, and access to data and computational resources.
It is envisioned that future collaboration environments will make remote collaboration
natural by being very easy to use, proactive, and engaging. They will proactively offload
tasks from collaborators to enhance any given collaboration. For example, an automated
“assistant” will retrieve information relevant to the collaboration. The system will configure
the network and network services to support the particular individuals in the collaboration.
Future collaboration environments will need to deal with unpredictable and emergent
changes, meet hard real-time constraints, and handle asynchronous events as they occur.
Scientific collaborations will be particularly effective if scientific experiments can be
conducted from local laboratories with all phases of the experiment coordinated so that it
appears to be a local endeavor to each participant. The right level of coordination must be
selected dynamically, depending on the task at hand and the feedback from the participants.
Each participant should be able to augment the physical world with virtual worlds to consider
“what-if” scenarios.
A collaborative scientific environment might include a large number of small sensors and
robots with varying capabilities, capable of being embedded into the natural environment
with minimum disturbance. Low-power nodes, with limited communication bandwidth, need
to understand local conditions and together collaborate to identify and monitor global
environmental conditions. Network traffic loads may be reduced if data and information can
be aggregated and correlated at a local site to the level of granularity required by the
collaborators.
2.4.2 Collaboration Scenario Networking Needs That Require Networking Research
To support collaboration environments in the future, networks will have to provide:
New middleware services
♦ Transparency among the collaborators, by accommodating heterogeneous
technologies and interfaces and asynchronous events among end users
♦ User trust including security and reliability
♦ A virtual whiteboard
♦ The ability to convey body language (e.g., eye contact), visual cues, haptic, and
olfactory information
♦ Ability to accommodate cultural differences among collaborators
♦ An automated “scribe” to record the collaboration including intrinsic information
such as voice inflections and body language
Ubiquitous access
End users will access the network through a wide range of technologies, depending on
what is available to them (such as wireless or optical access) where they are located. The
system must automatically configure its interface for the different access technologies.
Intelligent collaborations
The collaboration system must be able to support the varying capabilities of different
collaborators’ systems. This will require automated mechanisms to:
♦ Correctly identify every collaborator
♦ Retrieve collaborator preferences and permissions
♦ Detect each collaborator’s network speed, system protocols, and system capabilities –
e.g., different modes of operation such as a Personal Digital Assistant (PDA) versus a
dedicated multimedia facility
♦ Configure the system to support the above capabilities
This system will need to meet the needs of widely varying collaborative groups from
small to large, across diverse disciplines, and operating in differing environments that could
range from a laboratory supported by an array of technologies and high bandwidth
connectivity to a remote field site with more limited technologies and wireless connectivity.
In addition, the system needs to be able to extract real-time data from the scientific
instruments, computing and data resources, and human collaboration as it happens. It also
needs to extract contextual data (e.g. voice pitch and intensity) and bring additional relevant
information into the collaboration for use, reference, and/or citation. Using advanced pattern
recognition techniques and artificial intelligence agents to mine this complementary
information as the collaboration happens will help enhance the collaboration. These
capabilities contribute to ease of use, thereby encouraging system adoption, and they provide
additional information on the collaboration for establishing an historic record.
Virtual environments and coupling issues
A scientific collaboratory needs to support scientists performing both virtual simulations
and physical experiments. It should provide seamless support for interaction with science
discipline models, virtual reality environments, and on-line databases. It must also address
issues of coupling that can take several different forms. For example, two large-scale
simulations being used by different groups of collaborators may operate in the same physical
space but use different physical units such as meters versus millimeters. Other collaborations
may use multiple virtual environments or virtual environments interfacing with physical
experiments. A third example is multiple databases, accessed by different groups of
collaborators, that need to present data in different physical units or formats.
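The units problem, the simplest of these coupling issues, can be illustrated with a short conversion layer. The Python sketch below handles only lengths and uses invented values; a real coupling layer would also address time, coordinate frames, and data formats.

    # Sketch: convert coordinates between collaborating tools that use
    # different length units (e.g., meters vs. millimeters).

    TO_METERS = {"m": 1.0, "mm": 1e-3, "cm": 1e-2, "km": 1e3}

    def convert(point, src_unit, dst_unit):
        """Convert an (x, y, z) tuple from src_unit to dst_unit via meters."""
        factor = TO_METERS[src_unit] / TO_METERS[dst_unit]
        return tuple(coordinate * factor for coordinate in point)

    if __name__ == "__main__":
        # A point produced by a millimeter-based simulation, consumed by a
        # meter-based one.
        print(convert((1500.0, 250.0, 10.0), "mm", "m"))  # -> (1.5, 0.25, 0.01)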
Multiple modality expert consultation
Collaborations need to be able to include ad hoc consultation with experts wherever they
are located using interaction technologies ranging from PDAs to sophisticated collaboratory
environments.
Network measurement
Instrumentation for performance measurement should be provided throughout the network
to measure network performance and enable isolation of system faults. It needs to be
implemented for multiple link types (e.g., optical, electronic, wireless) and measure
end-to-end performance. Network measurements should be standardized across network providers
to provide this end-to-end capability. Performance measurement should also support
network reconfiguration for active networking.
Automated supervisory oversight
The system must support supervisory oversight for monitoring performance and
identifying problems. It must ensure that standards are adhered to when carrying out
experiments that have safety and/or environmental implications and/or when performing
experiment-critical functions. The system needs to be able to warn participants when
requirements for critical functions are not met.
Virtual meeting maker
The system must be able to schedule, establish, and record virtual meetings. It must
support schedule conflict resolution, scribing, attendance authentication, and archiving. In
addition, the system must support asynchronous access and coordination for meeting
absentees who access archived meeting materials.
Security and privacy
Security and privacy tools must be able to handle a wide range of requirements such as
authorization, end-to-end key management, and revocation of authorization. In addition, the
system must be able to support advanced security features such as selectively allowing
anonymous collaborators to participate and to retrospectively access archived collaboration
materials.
Other features
The system will also need to support a variety of additional capabilities such as shared and
private workspaces, an “electronic whisper” capability that allows two collaborators to hold a
private conversation during a collaboration session, and language translation.
2.5 Networked Medical Care
In the future, networks will support expert medical care, including surgery, delivered to
patients in remote and mobile locations on line, in real time, and collaboratively in a highly
secure, intelligent, dynamic, and reliable environment. Additionally, doctors will access
distributed medical records and medical expertise wherever it is located.
2.5.1 Medical Scenario Description
A middle-aged man at home begins to suffer chest pains. He uses a medical sensor to
take automated medical readings that are relayed to a medical center that determines he is
having a heart attack. In an ambulance dispatched to take him to the hospital, sensors
monitor his vital signs and cardiac function. A remote cardiologist monitors these data and
accesses the patient’s medical records. She orders an angiogram to be taken when the patient
reaches the hospital. The angiogram shows a possible anomaly and a remote consultant is
shown the angiogram over the Internet. The display warns that the resolution of the image,
as delivered over the Internet, does not meet the standard required for angiogram
interpretation.
Heart surgery is performed on the patient at the hospital. An anomalous cardiac
vasculature found in the patient leads the surgeon to consult an on-line 3-D anatomical
library in real time. The library finds a consultant surgeon who, also in real time, assists in
the operation, occasionally taking control of the haptic surgical robot.
2.5.2 Networked Medical Care Research Needs
Ubiquity
Multimegabit-per-second effective wireless bandwidth from multiple sources is needed.
Bandwidth available during the ambulance ride may occasionally be degraded so the network
should be able to identify networking alternatives, choose the best alternative for the
application, and reconfigure the network.
Trustworthiness
Trustworthiness has many components that collectively assure the end users of the
quality, timeliness, security, and reliability of the services provided by the network:
♦ Security and data integrity: First and foremost, the network must meet legal standards
for medical data privacy and security as currently documented in the Health Insurance
Portability and Accountability Act (HIPAA). This requires the networks to support
authentication of the patient and the end user, authorization for end users, encryption
to support privacy requirements, and traffic diversity to prevent identification of
restricted information through traffic analysis. Authorization and access should be
logged to provide an historical security record. Data security is required for restricted
data and to assure data accuracy and integrity.
♦ Quality of Service: QoS is required to support the strict demands of distributed
medical care delivery and collaboration. To support the cardiologist at a remote site,
the wireless channel must provide real-time video and real-time data indicating the
quality of the video display. Although this scenario may tolerate a fair amount of
latency, it will not tolerate jitter and the video, audio, and data channels must be
synchronized. Medical service must be provided across network service boundaries
in a dynamic and sometimes mobile environment. All devices need to support QoS,
and the system must be able to adapt in real time to networking or data content
changes.
Sensors and end user devices
Networks must support dynamic sensors and end user devices that must be identifiable
and locatable. Sessions may migrate from one device to another – for example, migrating
from a fixed end station in a patient’s home to PDAs in an ambulance requires networking
services that support significantly different access interfaces.
Collaboration environments
The networks must support ad hoc establishment of collaboration sessions for specific
access modes, locations, service needs, and networking capabilities. For example, one
participant may need voice-only capability while others may need varying degrees of video,
voice, and whiteboarding. The networks also need to support access to on-line resources
such as distributed computing and database access to support the collaboration. Security,
discussed above, is critical to collaboration environments.
Intelligent networking, end-to-end performance
The medical scenario angiogram procedure illustrates the need for end-to-end knowledge
of the network data path including the end user display devices to assure that angiogram
interpretation standards are met. Thus, an intelligent, scalable network needs to be able to
reconfigure itself and automatically resolve any QoS problem to meet medical standards.
The network must report any unresolvable problem to the participants.
Assured real-time service
For the bypass surgery scenario, the surgeon needs to retrieve 3-D image data sets, each of
which may be several gigabytes in size. The consultant must be able to view the surgery in
real time and accurately guide the surgical robot using its haptic controls. This requires a
network operating at high bandwidth with minimal latency and minimal jitter while
maintaining the security and integrity of the transmissions and the privacy of the patient data.
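A rough calculation makes the bandwidth requirement concrete; the data-set size and link speeds below are assumptions chosen for illustration, not figures taken from the scenario.

    # Rough transfer-time arithmetic for retrieving a 3-D image data set.
    # The 4-gigabyte size and the link speeds are illustrative assumptions.

    DATASET_BITS = 4 * 8 * 10**9   # a 4-gigabyte image set, in bits

    for name, bits_per_second in [("100 Mbps", 1e8), ("1 Gbps", 1e9), ("10 Gbps", 1e10)]:
        seconds = DATASET_BITS / bits_per_second
        print(f"{name:>8}: {seconds:6.1f} s to move the data set")

Even ignoring protocol overhead, only multi-gigabit links bring retrieval times into a range compatible with an ongoing operation, and the latency and jitter bounds on the haptic control loop are a separate, stricter constraint.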
2.6 High-Energy Physics
High-energy physics (HEP) has pushed against the limits of networking and computing
technologies for decades. Twenty years ago, the largest HEP experiment involved 100
physicists from many nations and acquired tens of thousands of magnetic tapes of data per
year; graduate students spent months reading those tapes to perform data queries. Life is not
so different for today’s physicists. The new BaBar detector at the Stanford Linear
Accelerator (SLAC) was designed by a large international collaboration of physicists at 72
institutions. The BaBar collaboration enables hundreds of physicists worldwide to query its
300-terabyte and rapidly growing database in hours or days rather than months. In the next
10 years, the Large Hadron Collider (LHC) experiments at CERN, the European Physics
Laboratory, where some 600 U.S. physicists form the largest national group, will face the
challenge of distributed analysis of hundreds of petabytes of data.
2.6.1 High-Energy Physics Scenario
The physics community greatly values being able to distribute digitized data electronically,
at the rate at which they are produced, from the site of an experiment to collaborators
worldwide who can analyze them. The HEP community has the goal of using affordable network and
computational resources to provide physicists with transparent access to a distributed data-
analysis system that uses all available resources as efficiently as possible. By 2005 to 2010,
HEP computing will involve queries on databases containing exabytes (10^18 bytes) of data
structured as up to 10^16 individually addressable objects. These massive amounts of data will
require the distribution of terabits per second of real-time data to major HEP data analysis
centers.
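The scale of these figures is easier to appreciate with some simple arithmetic on the numbers quoted above (the sketch below only does the division):

    # Arithmetic on the HEP data volumes quoted above.
    DATABASE_BYTES = 10**18   # an exabyte-scale database
    OBJECTS = 10**16          # individually addressable objects
    LINK_BPS = 10**12         # a terabit-per-second link

    print(f"average object size: {DATABASE_BYTES / OBJECTS:.0f} bytes")
    print(f"streaming one exabyte over 1 Tbps: "
          f"{DATABASE_BYTES * 8 / LINK_BPS / 86400:.0f} days")

Even at terabit-per-second rates, moving a full exabyte takes on the order of three months of continuous transfer, which is one reason distributed analysis and replication strategies matter as much as raw link speed.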
Challenging networking and other information technology research needed to enable
distribution of data, analysis, and collaboration includes:
♦ Multicast service delivered to multiple remote centers with diverse firewall filters
♦ Network error rate and robustness control without impacting the experiment’s data-
acquisition system
♦ Massive applications software – e.g., 3 million lines of BaBar C++ code
♦ Commercial object database management software
♦ Interfaces of the database with the network and storage
♦ Technology improvements including:
• Computing technologies
• Computer science
• Networking
• Computing system-to-network interfaces
• Fiber technologies
• Data storage
Improvements in HEP applications must be accomplished at minimal incremental costs.
To help contain costs, network engineering labor, required to configure, optimize, and
maintain networks, should be minimized by developing automated network engineering and
management.
HEP collaborations are increasingly international in composition. It is difficult to adopt
standards across the resulting international boundaries, so that the implementation of
uniform, collaboration-wide middleware, security, or hardware technologies is almost always
unrealistic. The best that can be achieved is the adoption of a set of protocols and interfaces
to link components that will almost certainly be implemented in different ways.
The international HEP research community is increasingly using Grid technologies, an
integrated suite of services developed with Federal IT R&D funding. The Grid is a set of
middleware tools and capabilities that enable seamless end user access to applications, data
storage, and compute resources to support high-end modeling. The Globus project
(http://www.globus.org/) is one state-of-the-art example of Grid development. Grid
middleware faces many hard computer science problems. Vertical integration of existing
components to provide Grid services to demanding, well-defined communities is essential to
progress on Grid architecture and technologies.
2.6.2 High-Energy Physics Networking and Networking Research Needs
Networking underlies many of the services and applications being developed to support
HEP. Progress in networking is expected to be evolutionary over the next five years, with
revolutionary capabilities being developed over the longer term. The following table
presents the current state of the art in various networking areas supporting HEP, what could
evolve by around 2006, and the requirements to approach meeting the HEP goals. The
current HEP capabilities are what is affordable, not what could be obtained with unlimited
funding.
(For each area below, the entries give the current HEP capability, the expected evolution by about 2006, and the HEP goal.)

Links Between Major Centers
• Current: 1 or 2 x 155 Mbps
• By 2006: 10 Gbps
• Goal: 1 Tbps

Bulk Transfer Protocol
• Current: TCP/IP + fixes
• By 2006: TCP/IP + more fixes
• Goal: A new, widely adopted transport protocol

Differentiated Services (CoS, QoS, Mixture of Packet and Circuit Switching, etc.)
• Current: 1.1 differentiated services (best effort + some Voice over IP (VoIP))
• By 2006: 2 differentiated services
• Goal: 6 differentiated services that are application-negotiated, on-demand, and responsive to cost and policy

Network Measurement, Analysis, Interpretation, and Action; Network Modeling
• Current: Limited measurement, analysis, and modeling
• By 2006: More and better measurement and analysis, some interpretation; models begin to predict non-obvious failure modes
• Goal: Automated measurement, analysis, and interpretation; automated action based on measured and modeled information

Support for Collaboration
• Current: Some proof-of-concept (PoC) prototypes; some commercial tools
• By 2006: New PoC prototypes; some mature components; still incomplete
• Goal: Collaborations form via the Internet, with a real sense of working together

Data-Grid: Authentication and Authorization
• Current: Local and manual
• By 2006: Cross-authentication via proxies
• Goal: New approaches to regulating access to resources

Data-Grid: Information Infrastructure (Replica Catalog, Resource Catalog, Software Catalog, Operation/Task Catalog, etc.)
• Current: Manual and local; limited ad hoc automation
• By 2006: Evolution of Globus by 2+ generations
• Goal: Efficient distributed information management for more than 10^16 virtual objects, using millions of operations, each using millions of lines of code (MLoC)

Data-Grid: Data Payload Infrastructure (Exabyte Databases, Reliable Replication, Storage Management, etc.)
• Current: A few x 100-TByte databases; PoC replication prototypes; PoC storage management
• By 2006: Bleeding-edge 10^19-byte databases; Grid replica management; Grid storage management
• Goal: Industry-standard exabyte databases, replication, and storage management

Data-Grid: Resource Discovery
• Current: Telephone and e-mail
• By 2006: Telephone, e-mail, and partial automation
• Goal: Automated discovery; standardized information models

Data-Grid: Distributed Resource Management, Distributed Job (Task, Operation) Management
• Current: Local batch systems
• By 2006: Prototype systems; Grid job management; early distributed resource management
• Goal: New approaches to regulating access to resources

Data-Grid: Virtual Data
• Current: Conceptual phase
• By 2006: Starting to work for cutting-edge HEP experiments
• Goal: A generally accepted and implemented paradigm

The Grid as an Integrated “Network” Service
• Current: Manually integrated services have been in use for more than 10 years
• By 2006: Vertical integration of fabric and data payload services; incomplete information services; incomplete resource management services
• Goal: Easy creation of vertically integrated, worldwide information management and processing systems from standard industry components
Notes:
1. HEP technologies that work well locally but do not become widely adopted and supported may inhibit collaboration
and prove costly. Qualifiers like “widely adopted,” “industry-standard,” and “generally accepted” are vitally important.
2. Elegant approaches to authentication and authorization appear to be available for organizations that are part of a single
administrative structure. Worldwide collaborations seem unlikely ever to fit this model. Discussion identified that a totally
new approach to regulating access to resources might foster more open scientific research.
The Role of Industry in HEP Networking R&D
Wherever possible, high-end science takes advantage of capabilities that are developed
and commercialized by industry. For example, the HEP community has benefited from cost
reductions and reliability increases provided by industrial commercialization of individual
middleware components, such as databases and well-defined information systems. Also, the
HEP community has benefited from the availability of commercial high-end computing
systems, high-bandwidth networks, and extensive middleware. It is likely that higher
bandwidth will be more affordable in the future due to economies of scale, greater supply,
and competition among providers. Carriers are beginning to make individual wavelengths
available to major customers. Affordable links between major HEP computer centers should
exceed 10 Gbps within five years and may approach 1 Tbps in less than a decade. However,
it is likely to be difficult to exploit the available bandwidth using industry-standard transport
protocols. TCP/IP requires fixes such as multiple streams to use today’s affordable
bandwidth. Additional fixes will be needed to accommodate the expected increases in
numbers of users, number of nodes, and network traffic. It is possible to develop a new
protocol or to extend TCP to work over dedicated links, but the extensive investment of
industry and users in the current protocols would likely hinder acceptance of alternatives.
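The difficulty can be illustrated with the well-known Mathis approximation, under which a single standard TCP connection's throughput is bounded by roughly (MSS/RTT) x (1.22/sqrt(loss)). The packet size, round-trip time, and loss rate in the Python sketch below are illustrative assumptions, not measurements; the calculation shows why running parallel streams is the common workaround.

    # Sketch: why one TCP stream cannot fill a long fat pipe (Mathis et al. model).
    # throughput <= (MSS / RTT) * (C / sqrt(p)), with C about 1.22 for standard TCP.
    # The MSS, RTT, and loss rate below are illustrative assumptions.
    import math

    MSS_BITS = 1460 * 8   # typical maximum segment size, in bits
    RTT_S = 0.1           # assumed 100 ms round-trip time
    LOSS = 1e-5           # assumed one loss per 100,000 packets

    def tcp_throughput_bps(streams=1):
        per_stream = (MSS_BITS / RTT_S) * (1.22 / math.sqrt(LOSS))
        return streams * per_stream

    for n in (1, 10, 100):
        print(f"{n:3d} stream(s): {tcp_throughput_bps(n) / 1e9:5.2f} Gbps (upper bound)")

Under these assumed conditions a single stream tops out around 45 Mbps, so on the order of a hundred parallel streams, or a different transport protocol, would be needed to approach 10 Gbps.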
Workshop participants identified a need for a vertically integrated HEP solution for
managing and processing the massive amounts of data expected from HEP experiments.
Networking research, development of faster computing systems and more capable
computational algorithms, and commercial development and marketing (productization)
together deliver components that provide part of this vertically integrated HEP solution.
New component technologies emerging from networking research and computer science are
normally funded only to the proof-of-concept stage and fall short of the level of product
hardening and support needed to provide technologies that can be reliably integrated into a
complex operational system. The collaboration by network researchers, computer scientists, and
application scientists required to provide vertical integration of the component capabilities
is itself research and development and, in the view of the workshop participants, should be
funded by the Federal IT R&D agencies.
The HEP community is rapidly taking advantage of the Grid infrastructure to enable
transparent, distributed, and international collaborations, resulting in improvements in the
ability to cooperatively carry out science and to analyze increasingly large volumes of HEP
data. However, the Grid has been developed primarily in universities, and industry is
currently largely decoupled from development of an integrated Grid capability. Thus, Grid
software and infrastructure have not benefited from the standardization, cost reductions, and
increased reliability often provided by commercial productization. This productization will
take place only if industry perceives the potential for profitably marketing the technologies.
Federal funding could help bridge the gap between the proof-of-concept prototype and the
point at which successful vertical integration has demonstrated commercial viability.
Section 3.0 Summary of Networking Research Needs
This section summarizes the networking research needs identified in the Section 2.0
scenarios, organized by research category.
3.1 Adaptive, Dynamic, and Smart Networking
Most of the breakout sessions identified a need for research into elements of adaptive and
dynamic networking to support ad hoc and mobile wireless access. The discussions of zero-
casualty war and the medical applications scenarios identified the need to dynamically
respond to developing situations with ad hoc, high-assurance networks supporting secure
multimedia capabilities. In these scenarios, not only are the situations changing, but the
participants in the networking sessions are also changing, with resulting changes in service
requirements such as networking services, security levels, and end user devices to be
supported.
Ad hoc networking to support deployable sensors for on-site chemical or temperature
monitoring will require knowing the locations of the sensors and organizing them into a
network capable of meeting requirements such as cost, location precision, measurement
capability, power, and networking capabilities. Research is needed on self-configuration,
connectivity to existing infrastructure, organization, and adaptation. Tradeoffs among
functionality, performance, and cost will need to be understood and managed. For example,
data aggregation and compression within the sensors may reduce communication
requirements but increase sensor costs. Aggregation and compression may also affect the
quality of the information sent to the monitors since such processing often changes the
informational content and precision.
Future networks will be orders of magnitude more complex than current networks and
must be able to respond to changing environments and dynamic networking as sensor
elements are added and deleted. For example, large sensor arrays will be subject to sensor
attrition that will require adaptive, dynamic, and smart networking to maximize the
effectiveness of the remaining sensors. Engineering and managing these networks
increasingly will require incorporating smart elements to automatically respond to the
changing elements and environments. Research should also address the dynamic
trustworthiness of the system and the information it is producing as the sensors and network
change. Network measurement is fundamental to determining the status of networking
elements to provide a basis for smart networking.
Smart networking research is needed for:
♦ Enabling sensors, networks, and applications to work together to increase the range of
data granularity the system responds to and reports; for example, applications may
vary significantly in the precision of a specific data parameter they require, thereby
allowing data, system, or cost tradeoffs
♦ Automatically managing networks of increasing complexity including self-
organizing, self-diagnosing, and self-healing networks
♦ Anticipating and automatically responding to network instabilities
♦ Network-aware applications that automatically respond to available networking
resources
♦ Application-aware networks that automatically reconfigure networks to improve
applications support
♦ Adaptive distributed systems: Applications may adapt based on network-provided
information and system feedback, or the network may adapt based on information
provided by the applications, as in implicitly adaptive networks.
3.2 Measurement, Modeling, Simulation, and Scalability
The Internet has expanded at a phenomenal rate, often driven by the need for increased
capacity and capabilities to support new “killer applications” that in the past have included
TCP/IP, e-mail, the Web, and Web browsers. With the continued evolution of Internet-based
applications, types of media transmitted (for example large images and video), increasing
connectivity of embedded devices, and increased support for arrays of sensors, the Internet
over the next 15 years is expected to grow by many orders of magnitude in the number of
nodes connected, amounts of information passed, and the number of users and their usage.
Revolutionary new applications barely foreseen today are expected to lead to even faster
expansion of the Internet and demand for Internet services. Instabilities may appear because
existing Internet technologies and their evolutionary extensions could be severely strained to
cope with this growth. Several of the breakout sessions discussed the need for research to
address the growth, scaling, and stability of the Internet.
Network measurement
Each of these breakout sessions discussed the need for metrics and measurement of
network performance. We do not have standardized technologies for measuring end-to-end
Internet performance, let alone standardized reporting of measurements for most Internet
nodes. This is a fundamental requirement for identifying current and developing bottlenecks
and instabilities and for measuring improvements in performance as new capabilities are
incorporated into the Internet. In the recent past, researchers have consistently observed that
increases in network link bandwidth do not translate into proportional increases in end-to-end
throughput for their applications. Measurement is imperative to study the causes of such
behavior and to support engineering and management of the network links to improve
performance for the end user.
Measurement research needs include:
♦ Intrinsic instrumentation: Make measurement a fundamental part of all systems on
the network
♦ Extensible Application Platform Interfaces (APIs): APIs must provide measurement
details to support network engineering to improve the end-to-end performance of the
application
♦ Data reduction, formatting, and storage: Network measurement data collected in this
environment must be reduced and stored in a format useful to end users. This
requires not only efficient data storage but also data synthesis, analysis, and
formatting capabilities.
♦ National networking measurement archives: Provide a national archive to store
network measurement data on a permanent basis. (Individual companies do not have
the incentive to record and archive these data. Individual researchers do not have the
resources required to provide the long-term archival storage.) A wide range of
information should be stored, since we cannot now know what will later prove to be
important.
♦ Ubiquitous inter-domain cooperation: Separate administrative authorities must agree
upon a common set of measurement data that will be made available outside of their
specific domains
♦ Correlation of measurements across levels: Data collected at the network
connectivity, routing, end-to-end, and application levels must be correlated to provide
a complete understanding of the system, to support network modeling and to provide
a basis for network management.
♦ Synchronization of measurements: Measurements made at different levels and in
different logical areas of the network must be time-synchronized to provide an
instantaneous snapshot of network status and performance. The times when
measurements are initiated must be synchronized and the ways timestamps are
applied must be standardized.
♦ Modeling support: The measurement technologies must support network modeling to
predict network failure modes, carry out network design and development, and enable
network management.
♦ Security: The measurement system must support threat-evaluation models used to
configure the network to withstand a wide range of possible attacks.
♦ Privacy: Measurement data must be securely transmitted to assure the privacy of
individuals and administrative entities.
♦ New link types: The measurement and monitoring mechanisms will need to be
adaptable to new link technologies such as optical networking.
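As a purely illustrative way to picture several of these items, the sketch below defines a minimal self-describing measurement record carrying a synchronized UTC timestamp, the measurement level, and the metric; the field names are assumptions for this sketch, not a proposed standard.

    import json
    from dataclasses import dataclass, asdict
    from datetime import datetime, timezone

    @dataclass
    class MeasurementRecord:
        # Minimal self-describing record; all field names are illustrative only.
        timestamp_utc: str   # ISO 8601, from a synchronized clock (e.g., NTP or GPS)
        domain: str          # administrative domain that produced the record
        level: str           # "link", "routing", "end-to-end", or "application"
        metric: str          # e.g., "rtt_ms", "loss_rate", "throughput_mbps"
        value: float
        source: str          # measuring node
        target: str          # measured peer or path endpoint

    def new_record(domain, level, metric, value, source, target):
        ts = datetime.now(timezone.utc).isoformat()
        return MeasurementRecord(ts, domain, level, metric, value, source, target)

    # A record that an archive or cross-level correlation tool could parse:
    rec = new_record("example-isp", "end-to-end", "rtt_ms", 42.7,
                     "probe.example-isp.net", "server.example.edu")
    print(json.dumps(asdict(rec), indent=2))

Records of this general shape, collected at every level and archived with consistent timestamps, are what would make the correlation, synchronization, and national-archive items above practical.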
Network modeling and simulation
Network modeling is needed to support research on network behavior and network
management as the Internet grows in magnitude and complexity. Modeling is also needed to
understand current network behavior and predict future behavior for assessing how new
technologies will affect the stability of the Internet as they are introduced.
Network scalability
The Internet is expected to grow, potentially by orders of magnitude, in the number of
nodes, the amount of information carried, and its management complexity. Current
network architectures do not scale to handle these increases. A “science” of networked
systems modeling is needed to understand how the Internet is likely to fail under increased
loads, to fix potential problems before they occur, and to develop scalable architectures.
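A simple queueing calculation hints at the kind of nonlinear failure mode such a science would need to predict. In an M/M/1 model (one deliberately simple assumption among many possible), mean time in the system is 1/(mu - lambda), so delay grows without bound as offered load approaches capacity; the service rate and load points below are arbitrary illustrative values.

    def mm1_mean_delay(service_rate, arrival_rate):
        # Mean time in an M/M/1 system: W = 1 / (mu - lambda), valid for lambda < mu.
        if arrival_rate >= service_rate:
            return float("inf")
        return 1.0 / (service_rate - arrival_rate)

    service_rate = 1000.0  # packets per second (illustrative)
    for utilization in (0.5, 0.8, 0.9, 0.95, 0.99):
        arrival_rate = utilization * service_rate
        delay_ms = mm1_mean_delay(service_rate, arrival_rate) * 1000
        print(f"load {utilization:4.0%} -> mean delay {delay_ms:7.2f} ms")
    # Delay doubles between 80 and 90 percent load and grows another tenfold
    # by 99 percent: behavior at scale cannot be extrapolated linearly.

Real networks are far more complex than this toy model, which is precisely why predictive modeling of large networked systems requires dedicated research.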
3.3 Trust: Security, Privacy, and Reliability
The Internet will be used for commercial, medical, scientific, and other purposes only if users
trust its security, privacy, and reliability. All of the workshop’s scenarios inherently relied
on this user trust. With the projected expansion of the Internet and the applications and
media it carries, current issues in developing trust relationships will become more pervasive.
The medical scenario relied on data access, consultations, and real-time collaborations with
high assurance, security, and privacy. The disaster scenario identified the need to use
distributed data and computing resources in near real time to support modeling and
prediction and to support field units. The SWARMS scenario required security to protect
against intrusion or espionage.
In some scenarios, it may be possible to quantify elements of security, privacy, and
reliability associated with network elements or network links. Under these circumstances,
“chains of trust” may be developed such that a user can choose networking paths based on
highest overall trust or use information from specific network nodes that provide the highest
confidence in the end product.
Trust may vary over time. Corroborating information may increase our confidence in
information from some nodes, or the networking architecture may change to a more reliable
or secure pathway.
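One way to make such chains of trust operational, assuming each link can be assigned a trust value between 0 and 1 and that the trust of a path is the product of its link trusts (both assumptions made only for illustration), is to select the path maximizing that product; this is equivalent to a shortest path over weights of -log(trust), as in the sketch below.

    import heapq
    import math

    def most_trusted_path(links, source, destination):
        """links: dict mapping node -> list of (neighbor, trust), 0 < trust <= 1.
        Returns (overall_trust, path) for the path maximizing the product of
        link trusts, computed as a Dijkstra search over -log(trust) weights."""
        best = {source: 0.0}
        previous = {}
        heap = [(0.0, source)]
        while heap:
            cost, node = heapq.heappop(heap)
            if node == destination:
                break
            if cost > best.get(node, math.inf):
                continue
            for neighbor, trust in links.get(node, []):
                new_cost = cost - math.log(trust)
                if new_cost < best.get(neighbor, math.inf):
                    best[neighbor] = new_cost
                    previous[neighbor] = node
                    heapq.heappush(heap, (new_cost, neighbor))
        if destination not in best:
            return 0.0, []
        path, node = [destination], destination
        while node != source:
            node = previous[node]
            path.append(node)
        return math.exp(-best[destination]), list(reversed(path))

    # Illustrative topology with assumed per-link trust values:
    links = {
        "A": [("B", 0.9), ("C", 0.99)],
        "B": [("D", 0.9)],
        "C": [("D", 0.95)],
    }
    print(most_trusted_path(links, "A", "D"))  # -> (~0.94, ['A', 'C', 'D'])

Because trust values change over time, such path selection would have to be repeated as corroborating information arrives or as links are re-evaluated.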
Security, privacy, and reliability research needs include:
♦ Quality of Service for critical applications in a complex environment that includes
multiple providers, mobile and distributed access, and multimedia service (for
example, for collaborations)
♦ Security, privacy, and reliability in dynamic, complex, and heterogeneous systems
♦ Scalability to accommodate heterogeneous environments and changing needs and
hierarchies
♦ Trust modeling, configuring for trust, and responding to changing trust over time
♦ Trust retractability
3.4 Networking Applications
Each of the workshop scenarios relies on multiple networking applications. The
workshop participants stressed the importance of developing these applications. Some of the
applications identified in the scenarios that require networking research include:
♦ Telemedical remote collaboration with high assurance and security
♦ Sensornet: Self-organizing, dynamic, heterogeneous networks of sensors with
network connectivity to remote resources
♦ Collaboratories: Support for interactions that are natural, intelligent, and secure, with
multimedia capabilities and automated configuration
♦ Grid
♦ Hierarchical data delivery: Automatically develop and deliver data tailored to
differing levels of a hierarchy
The workshop participants indicated that some of the potentially largest uses of the
Internet will be for revolutionary applications not yet developed.
3.5 Networking Middleware
Networking provides connectivity among sensors, applications, end users, and distributed
resources such as data repositories and computing facilities. Middleware assures that these
elements work within a coordinated, transparent, and synchronized framework to provide end
user services. Middleware can, for example, provide transparency among network service
providers to seamlessly and securely transport information. Most of the workshop breakout
sessions addressed the need to develop new middleware capabilities for networking.
The needs for enabling the Grid application illustrate many of the middleware networking
needs, including:
♦ Vertically integrated, transparent, worldwide infrastructure for managing data and
information, distributed storage, and access to computational resources
♦ Automated discovery of resources
Additional middleware needs identified in the scenarios include:
♦ Automated collaboratory setup, services, and toolsets
♦ Seamless, transparent service across heterogeneous network elements
♦ Control and management of dynamic networks
♦ Management of security, privacy, and reliability in a dynamic environment
♦ Automated measurement
♦ Productization to harden and standardize software by commercial developers
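As a toy illustration of the automated resource discovery item above (the interface below is an assumption for this sketch, not an existing Grid service), middleware could let resources advertise their properties and let applications query by requirement:

    class ResourceRegistry:
        # Toy in-memory registry; a real discovery service would be distributed,
        # authenticated, and kept fresh with soft-state timeouts.

        def __init__(self):
            self._resources = {}

        def advertise(self, name, **properties):
            # A resource announces itself and its capabilities.
            self._resources[name] = properties

        def discover(self, **requirements):
            # Return resources matching every requirement; numeric requirements
            # are treated as minimums.
            def satisfies(props):
                for key, wanted in requirements.items():
                    have = props.get(key)
                    if isinstance(wanted, (int, float)) and isinstance(have, (int, float)):
                        if have < wanted:
                            return False
                    elif have != wanted:
                        return False
                return True
            return [name for name, props in self._resources.items() if satisfies(props)]

    registry = ResourceRegistry()
    registry.advertise("cluster-a", kind="compute", cpus=512, site="lab1")
    registry.advertise("archive-b", kind="storage", terabytes=40, site="lab2")
    print(registry.discover(kind="compute", cpus=128))  # -> ['cluster-a']

The research challenge is doing this transparently, securely, and at worldwide scale across independently administered domains.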
3.6 Testbeds
Workshop breakout session participants cited the need for testbeds to support networking
research in performance measurement, security, privacy, reliability, active networking,
adaptive mobile networks, intelligent networking, applications, and middleware. They also
discussed the need for testbeds to bridge the transition from the research stage to successful
commercialization of the technologies. An example is a Grid for high-energy physics
research. Industrial participation in testbeds is often needed to develop and refine standards
and to promote technology transfer.
3.7 Collaboration Environments
Networks support human interactions including human-to-human interactions such as
collaborations and human-to-machine interactions such as access to distributed resources.
Most of the breakout groups discussed the need for collaborations to be as good as face-to-
face meetings or to have enhanced capabilities such as immersive environments or automatic
translations, for example for international collaborations.
Some collaboration environment capabilities that the Internet should support include:
♦ Ubiquitous access with a plug-and-play capability
♦ Automatic configuration to accommodate the personal preferences and characteristics
of the participants and the heterogeneity of their environments (including extreme
differences such as PDA versus CAVE environments)
♦ Authentication, authorization, security, privacy, and access control
♦ Resource-sharing with remote collaborators
♦ Natural and intuitive interactions supported by virtual, immersive, and integrated
environments that provide body language, visual, audio, textual, haptic, and olfactory
capabilities
♦ Language translation
♦ Large-scale on-line virtual and physical models
♦ Expert consultation
♦ Whisper mode (support for side conversations)
♦ Automated supervisory oversight
3.8 Revolutionary Research
This report has identified many research areas that are important in assuring the future
growth, functionality, robustness, and usability of the Internet. Evolutionary networking
research is expected to result in improvements in these areas. However, high-risk
revolutionary research may provide unexpected, dramatic improvements that accelerate our
capacity to meet growing networking needs. Revolutionary research comes from
revolutionary visions of research groups or individuals, adaptation of research from widely
different disciplines, interdisciplinary collaborations, and other research initiatives.
Some areas of networking are in need of revolutionary research. For example,
revolutionary research is needed to address the scalability issues that will be increasingly
critical with the projected orders of magnitude increases in the number of network nodes,
network users, and network traffic. Revolutionary research is needed to understand network
behavior with these orders-of-magnitude increases and to study networked systems’
complexity. Disciplines such as chaos theory, economics, catastrophe theory, stochastic
processes, and generalized control theory may contribute to these complexity studies.
3.9 Revisit Networking Fundamentals
The Internet is based on fundamental concepts, technologies, and standards such as the
TCP/IP protocol that were developed and implemented decades ago. These standards and
technologies have provided a robust infrastructure for the phenomenal Internet growth we
have experienced since then, growth that was not foreseen when they were developed. They
may not be able to meet still-growing demands: Internet growth may exceed their ability to
scale, or new technologies or a new protocol may provide greater efficiency, robustness, or
scalability.
In 10 to 15 years, the core backbone network could well be optical, using Dense Wavelength
Division Multiplexing (DWDM) with thousands of wavelengths per fiber. Broadly deployed
network access will certainly be heterogeneous, incorporating broadband wireless, satellite,
broadband wireline, and optical fiber with multiple wavelengths. Protocols and their
associated services need to extend across these heterogeneous technologies with end-to-end
transparent functionality. Fundamental changes may be required in addressing, routing,
forwarding, and transport to support this increased scale and functionality. Revolutionary
research, such as revisiting TCP/IP, can address basic issues of protocols, performance,
complexity, and scalability.
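For a sense of the scale involved, a rough and purely illustrative calculation: assuming 1,000 usable wavelengths per fiber and 10 Gb/s per wavelength (the per-wavelength rate is an assumption of this note, not a workshop figure), a single fiber would carry 1,000 x 10 Gb/s = 10 Tb/s, several orders of magnitude beyond what today's widely deployed protocols were designed and tuned for.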
4.0 Federal Networking Research is Needed: Industry Isn’t Going to Do It
Federal networking research is needed to enable the scenarios described in this report and
to provide for the long-term growth and viability of the Internet. Industry is focused on
developing the commercial technologies required for the near-term (one to three years)
growth of the Internet. Industry is not focused on the research in scalability, management,
and improved networking services required for the expected long-term Internet growth. The
trend is toward even less industrial networking research; indeed, many industrial research
facilities, including Lucent Laboratories and Xerox PARC, are slated to be shut down by their
parent organizations.
Current commercial networking technologies are largely based on technologies developed
under long-term research funded by Federal agencies. Without continuing Federal funding
of basic research in new networking technologies, such as optical networking, scalable
protocols, active networking, dynamic networking, and intelligent networking, the pipeline of
basic networking technologies needed to support further expansion of the Internet economy
and the scenarios described in this report will not be available.
The Federal agencies engaged in networking research have an impressive record of
supporting basic networking research and transitioning the results of this research to the
industrial sector. Many networking technologies and applications are developed, refined, and
tested in Federally funded testbeds with the active participation of industrial partners, who
often contribute equipment, labor, and other resources. This Federal/industry partnership
leverages Federal research funding and provides direct commercial experience in the
developing technologies, thereby hastening technology transfer. This partnership is expected
to continue for the research identified in this report.
Appendices
1. Invitation to submit white papers, January 5, 2001
2. List of LSN workshop attendees
Scenarios
3. Zero Casualty War Scenario
4. Deeply Networked World/SWARMS (Smart World Airforce Repair and Maintenance
System) Scenario
5. Crisis Management Scenario
6. Collaboration Scenario
7. Networked Medical Care Scenario
8. High Energy Physics Scenario
Appendix 1: Invitation to Submit White Papers, January 5, 2001
Interagency Working Group for Information Technology Research and Development
(ITRD)
Large Scale Networking (LSN) Coordinating Group
Call for White Papers
Workshop on New Visions for Large-Scale Networks: Research and Applications
March 12 - 14, 2001
Vienna, VA
Paper Submission Deadline February 4, 2001
The phenomenal worldwide explosion in global networks and widespread dissemination
of advanced technology is rooted in thirty years of patient investment by federal R&D
agencies. As commercial investment has poured into this area, the research focus has
naturally responded by seeking solutions to near-term problems. The goal of this workshop
is to stimulate bold thinking that will take us off this evolutionary path and to explore new
directions that could revolutionize future networks and applications.
Researchers from related disciplines are invited to share their perspectives in helping to
define a broad research agenda for the future of networking and distributed applications. The
goal will be to envision and identify networking technology needs and possibilities that
would revolutionize the way we live and work in the decades ahead, but that are out of scope
for today's profit-driven R&D programs. The academic, industrial and governmental
research communities are invited to submit white papers that describe radically new visions
for the future, as well as possible steps to realization. Accepted papers will form the basis for
panel discussions and presentations, and will also be used to inform development of long-
term research programs within the sponsoring organizations.
Of particular interest are submissions that elucidate an exciting new area of research that
is radically forward-looking and that holds the potential to yield unexpected results. For any
given focus area, quantitative explication of critical research barriers and limitations of
existing approaches is required. Authors are encouraged to bridge the gap between the
broad scale vision and the specific technologies, however difficult to achieve, needed for
ultimate realization of the vision. For example, one may include specific research findings
(original or cited) that may indicate expansive future successes or one may delineate
assumptions that are made with regard to advances in supporting or enabling technologies.
Alternatively, authors may also choose to structure the paper in the form of a proposal
abstract to be submitted in the year 2005 or beyond. Such an alternative submission should
include a description of deliverables (hardware, software, system prototypes, and algorithms)
that would be produced three to four years from the submission date.
The total length of the paper should not exceed six pages, including any figures, with a
minimum font size of 10 points. The first page (the cover sheet) must show the submission
title, names and contact information for the author(s) and/or a contact person. The cover
sheet should also include an abstract that succinctly describes the main idea, innovative
claims and the critical technical barriers. Submissions must be formatted in Microsoft Word
or Adobe PDF format.
Attendance will be by invitation; some limited support for travel and expenses will be
available for invitees. Papers must be submitted electronically to lsn_workshop@snap.org
by 5pm Sunday, February 4, 2001. Those selected to participate will be notified by February
16, 2001. Information related to this workshop will be posted at:
http://www.eventmakeronline.com/sta/... For further
information, please send email to lsn_workshop@snap.org.
The workshop is sponsored by the Federal Large Scale Networking Coordinating Group,
DARPA, DOE, NASA, NIST, NLM and NSF.
Appendix 2: List of LSN Workshop Attendees
Last Name, First Name | Organization | Email
Ackerman, Michael | NIH/NLM | ackerman@nlm.nih.gov
Agarwal, Deb | Lawrence Berkeley Laboratory | daagarwal@lbl.gov
Agarwal, Sharad | University of California - Berkeley | sagarwal@eecs.berkeley.edu
Agrawala, Ashok | University of Maryland | agrawala@cs.umd.edu
Ahmed, Mohin | HRL Laboratories | mohin@hrl.com
Almes, Guy | Internet2 | almes@advanced.org
Aronson, Jules | NIH/NLM | aronson@nlm.nih.gov
Banerjee, Suman | University of Maryland | suman@cs.umd.edu
Barford, Paul | University of Wisconsin | pb@cs.wisc.edu
Bauer, Steve | Massachusetts Institute of Technology | bauer@lcs.mit.edu
Bernholz, David | NCO/ITR&D | bernholz@itrd.gov
Bhasin, Kul | NASA Glenn Research Center | kbhasin@grc.nasa.gov
Blumenthal, Marjory | Computer Science & Telecommunications Board | mblument@nas.edu
Boroumand, Javad | Cisco Systems | jborouma@cisco.com
Bradaric, Ivan | Drexel University | ivan.bradaric@drexel.edu
Braden, Bob | University of Southern California – Information Sciences Institute | braden@isi.edu
Brett, George | NLANR / Web100 | ghb@ncsa.uiuc.edu
Brown, Bruce | Institute for Defense Analyses | bkbrown@ida.org
Burns, Catherine | University of Waterloo | c4burns@engmail.uwaterloo.ca
Bush, Aubrey | NSF | abush@nsf.gov
Bush, Stephen | GE Corporate R & D | bushsf@crd.ge.com
Calvin, Jim | MIT – Lincoln Laboratory | jcalvin@ll.mit.edu
Carlson, Rich | Argonne National Laboratory | racarlson@anl.gov
Carter, Bob | Honeywell Laboratories | carter_robert@htc.honeywell.com
Catlett, Charlie | Argonne National Laboratory | catlett@mcs.anl.gov
Claffy, KC | CAIDA | kc@caida.org
Clark, Dave | MIT | ddc@lcs.mit.edu
Cohen, Danny | CNRI | dannycohen@ieee.org
Corbato, Steve | UCAID/Internet2 | corbato@internet2.edu
Cox, Chip | NSF/CISE/ANIR | chip@cox.net
Dao, Son | HRL Laboratories | skdao@hrl.com
Das, Sajal K. | University of Texas at Arlington | das@cse.uta.edu
desJardins, Dick | NASA Research & Education Network | rdesjardins@arc.nasa.gov
Dev, Parvati | Stanford University | parvati.dev@stanford.edu
Diersen, Dave | Chief of Naval Operation's Strategic Studies Group | diersend@nwc.navy.mil
Domich, Paul | Office of Science & Technology Policy | pdomich@ostp.eop.gov
Durst, Robert | The MITRE Corporation | durst@mitre.org
Echiverri, Kathy | Institute for Defense Analysis | kechiver@ida.org
Edwards, Hal | Nortel Networks | edwardsh@nortel.com
Eisenberg, Jon | Computer Science & Telecommunications Board | jeisenbe@nas.edu
ElBatt, Tamer | HRL Laboratories | telbatt@wins.hrl.com
Ephremides, Anthony | University of Maryland | tony@eng.umd.edu
Evans, Joe | University of Kansas | evans@ittc.ku.edu
Feng, Wu-chang | Oregon Graduate Institute | wuchang@cse.ogi.edu
Feng, Wu-chi | Ohio State University | wuchi@cis.ohio-state.edu
Feng, Wu-chun | Los Alamos National Laboratory & Ohio State University | feng@lanl.gov
Fisher, Darleen | NSF | dlfisher@eecs.berkeley.edu
Fleming, Robert | Aether Wire & Location, Inc. | bob@aetherwire.com
Foster, Ian | Argonne National Laboratory | foster@mcs.anl.gov
Foster, Mark | NASA/NREN | mafoster@arc.nasa.gov
Freeman, Ken | NASA Ames Research Center | kfreeman@arc.nasa.gov
Frost, Victor | University of Kansas | frost@eecs.ukans.edu
Furlani, Cita | NCO/ITR&D | furlani@itrd.gov
Gary, Pat | NASA Goddard Space Flight Center | pat.gary@gsfc.nasa.gov
Gilliam, David | NASA/Jet Propulsion Laboratory | david.gilliam@jpl.nasa.gov
Golubchik, Leana | University of Maryland | leana@cs.umd.edu
Govindan, Ramesh | University of Southern California – Information Sciences Institute | govindan@isi.edu
Greene, Tom | NSF/CISE/ANIR | tgreene@nsf.gov
Griggs, Kathleen | Puritan Research Corp. | kgriggs@puritanresearch.com
Griggs, Steve | Multi Spectral | sgriggs@multispectral.com
Gritter, Mark | Stanford University | mgritter@dsg.stanford.edu
Hayward, Gary | Telcordia Technologies | gah@lts.ncsc.mil
Hollebeek, Bob | University of Pennsylvania | bobh@nscp.upenn.edu
Howe, Sally | NCO/ITR&D | howe@itrd.gov
Hughes, Larry | Dalhousie University | lhughes2@is.dal.ca
Ingle, Jeff | Intelligence Community CIO Staff | jeffeti@odci.gov
Irwin, Basil | National Center for Atmospheric Research | irwin@ncar.ucar.edu
Izadpanah, Hossein | HRL Laboratories | hizad@hrl.com
Jackson, Deborah | NASA/Jet Propulsion Laboratory | deborah.j.jackson@jpl.nasa.gov
Jannotti, John | MIT | jj@lcs.mit.edu
Joa-ng, Mario | Telcordia Technologies | mjoang@research.telcordia.com
Johnson, Marjory | RIACS/NASA Ames Research Center | mjj@riacs.edu
Jones, Kevin | NASA Ames Research Center | kjones@arc.nasa.gov
Kandlur, Dilip | IBM TJ Watson Research Center | kandlur@us.ibm.com
Khan, Javed | Kent State University | javed@kent.edu
Kind, Pete | Institute for Defense Analysis | pkind@ida.org
Kittka, Kevin | Science & Technology Associates | kkittka@snap.org
Konishi, Kazunori | Asia-Pacific Advanced Networking | konishi@jp.apan.net
Koob, Gary | DARPA/ITO | gkoob@darpa.mil
Kovar, David | Western Disaster Center | kovar@webnexus.com
Kulik, Joanna | MIT | jokulik@lcs.mit.edu
Kulkarni, Amit | GE Corporate R & D | kulkarni@crd.ge.com
Kumar, Mohan | University of Texas at Arlington | kumar@cse.uta.edu
Kumar, Rakesh Teddy | Sarnoff Corporation | rkumar@sarnoff.com
Kumar, Sri | DARPA/ITO | skumar@darpa.mil
Kushner, Cherie | Aether Wire & Location, Inc. | cherie@aetherwire.com
Larsen, Ron | MAITI | rlarsen@deans.umd.edu
Lehman, Thomas | University of Southern California, Information Sciences Institute | tlehman@isi.edu
Lennon, Bill | Lawrence Livermore National Laboratory | wjlennon@llnl.gov
Liebeherr, Jorg | University of Virginia | jorg@cs.virginia.edu
Lockwood, John | Washington University | lockwood@arl.wustl.edu
Loyall, Joe | BBN Technologies | jloyall@bbn.com
MacKenzie, Robert | Solipsys | robert.mackenzie@solipsys.com
Maeda, Mari | DARPA/ITO | mmaeda@darpa.mil
Mandrekar, Ishan | Drexel University | ishan@io.ece.drexel.edu
Mankin, Allison | University of Southern California – Information Sciences Institute | mankin@isi.edu
Marquis, Jeff | University of Texas at Arlington | marquis@prismpti.com
Mathis, Matt | Pittsburgh Supercomputer Center | mathis@psc.edu
Maughan, Doug | DARPA/ATO | dmaughan@darpa.mil
McFarland Jr., Ray | Laboratory for Telecommunication Science | rimcfar@afterlife.ncsc.mil
Miller, Grant | NCO/ITR&D | miller@itrd.gov
Mills, Dave | University of Delaware | mills@eecis.udel.edu
Mills, Kevin | NIST | kmills@nist.gov
Minden, Gary | University of Kansas | gminden@ittc.ukans.edu
Mishra, Amitabh | Virginia Tech | mishra@vt.edu
Monaco, Gregory | NSF | greg@greatplains.net
Montgomery, Doug | NIST | dougm@nist.gov
Mosse, Daniel | University of Pittsburgh | mosse@cs.pitt.edu
Mouchtaris, Petros | Telcordia Technologies | p