Scali MPI Connect™ Users Guide, Software Release 4.
Acknowledgement
The development of Scali MPI Connect has benefited greatly from the work of people not connected to Scali. We especially wish to thank the developers of MPICH, whose work served as a reference when the first version of Scali MPI Connect was implemented. The full list of people who have contributed algorithmic improvements to Scali MPI Connect is impossible to compile here. We apologize to those who remain unnamed and mention only those who are certainly responsible for a step forward.
SCALI “BRONZE” SOFTWARE CERTIFICATE (hereinafter referred to as the “CERTIFICATE”) issued by Scali AS, Olaf Helsets Vei 6, 0619 Oslo, Norway (hereinafter referred to as “SCALI”)
DEFINITIONS
- “SCALI SOFTWARE” shall mean all contents of the software disc(s) or download(s) for the number of nodes the LICENSEE has purchased a license for (as specified in the purchase order, invoice, order confirmation or similar), including modified versions, upgrades, updates, DOCUMENTATION, additions, and copies of the software.
- “CANCELLATION PERIOD” shall mean the period between SHIPPING DATE and INSTALLATION DATE or, if installation is not carried out, the period of 30 days after SHIPPING DATE, counted from the first NORWEGIAN WORKING DAY after SHIPPING DATE.
- “US WORKING DAYS” shall mean Monday to Friday, except USA Public Holidays.
- “US BUSINESS HOURS” shall mean 9.00 AM to 5.00 PM Eastern Standard Time.
- “NORWEGIAN WORKING DAYS” shall mean Monday to Friday, except Norwegian Public Holidays.
www.scali.com/download free of charge. The Licensee may request such new REVISIONS and BUG FIXES of the RELEASE, and supplementary material thereof, made available on CD-ROM or paper upon payment of a media and handling fee in accordance with SCALI’s pending price list at the time such order is placed. The above maintenance services may, in certain cases, be excluded from the order placed by non-commercial customers, as defined by SCALI.
III SCALI SERVICES TERMS SCALI BRONZE SOFTWARE MAINTENANCE AND SUPPORT SERVICES Unless otherwise specified in the purchase order placed by the LICENSEE, SCALI shall provide SCALI BRONZE SOFTWARE MAINTENANCE AND SUPPORT SERVICES in accordance with its maintenance and support policy as referred to in this Clause and the Clause “SCALI’s Obligations” hereunder, which includes error corrections, RELEASES, REVISIONS and BUG FIXES to the RELEASE of the SCALI SOFTWARE.
related to, referring to or caused by SCALI SOFTWARE, then the LICENSEE shall pay SCALI’s standard commercial time rates for all off-site services, and any on-site services provided, plus actual travel and per diem expenses relating to such services.
fully obliged by the terms and conditions set out in this CERTIFICATE and SCALI’s prior written approval of the transfer. SCALI’s approval shall in any case be deemed granted unless contrary notice is sent from SCALI within 7 NORWEGIAN WORKING DAYS from receipt of notification of the transfer in question from the LICENSEE. Upon transfer, LICENSEE must deliver the SCALI SOFTWARE, including any copies and related documentation, to the Transferee.
Nothing in this CERTIFICATE shall be construed as:
- a warranty or representation by SCALI that anything made, used, sold or otherwise disposed of under the license granted in the CERTIFICATE is or will be free from infringement of patents, copyrights, TRADEMARKS, industrial designs or other INTELLECTUAL PROPERTY RIGHTS; or
- an obligation by SCALI to bring or prosecute or defend actions or suits against third parties for infringement of patents, copyrights, trade-marks, industrial designs or other
No action, whether in contract or tort (including negligence), or otherwise arising out of or in connection with this CERTIFICATE may be brought more than six months after the cause of action has occurred.
Termination.
No term or provision hereof shall be deemed waived and no breach excused unless such waiver or consent is in writing and signed by the party claimed to have waived or consented.
Governing Law
This CERTIFICATE shall be governed by and construed in accordance with the laws of Norway, with Oslo City Court (Oslo tingrett) as the proper legal venue.
Table of contents
Chapter 1 Introduction
1.1 Scali MPI Connect product context
1.2 Support
1.2.1 Scali mailing lists
1.2.2 SMC FAQ
3.2.6 Notes on compiling with MPI-2 features
3.3 Running Scali MPI Connect programs
3.3.1 Naming conventions
3.3.2 mpimon - monitor program
3.3.3 mpirun - wrapper script
5.3.1 How to get expected performance
5.3.2 Memory consumption increase after warm-up
5.4 Collective operations
5.4.1 Finding the best algorithm
Appendix A Example MPI code
Chapter 1 Introduction
This manual describes Scali MPI Connect (SMC) in detail. SMC is sold both as a separate stand-alone product (the SMC distribution) and integrated with Scali Manage (the SSP distribution). Some integration issues and features of the MPI library are also discussed in the Scali Manage Users Guide, the user's manual for Scali Manage. This manual is written for users who have a basic programming knowledge of C or Fortran, as well as an understanding of MPI.
CPU-intensive parallel applications are programmed using a programming library called MPI (Message Passing Interface), the state-of-the-art library for high performance computing. Note that the MPI library itself is NOT described in this manual; MPI is defined by a standards committee, and the API, along with guides for its use, is available free of charge on the Internet.
1.2.6 Licensing
SMC is licensed using the Scali license manager system. In order to run SMC, a valid demo or permanent license must be obtained. Customers with valid software maintenance contracts with Scali may request this directly from license@scali.com. All other requests, including DEMO licenses, should be directed to sales@scali.com.
1.4 Acronyms and abbreviations
IA64: Instruction Set Architecture 64; Intel 64-bit architecture (Itanium, EPIC).
Infiniband: A high speed interconnect standard available from a number of vendors.
MPI: Message Passing Interface; the de-facto standard for message passing.
Myrinet™: An interconnect developed by Myricom. Myrinet is the product name for the hardware (see GM).
1.5 Terms and conventions
Unless explicitly specified otherwise, gcc (the GNU C compiler) and bash (the GNU Bourne-Again SHell) are used in all examples.
Chapter 2 Description of Scali MPI Connect
This chapter gives the details of the operation of Scali MPI Connect (SMC). SMC consists of libraries to be linked and loaded with the user application program(s), and a set of executables which control the start-up and execution of the user application program(s). The relationships between these components and their interfaces are described in this chapter.
Figure 2-1 illustrates how applications started with mpimon have their communication system established by a system of daemons on the nodes. This process uses TCP/IP communication over the Ethernet network, whereas the optional high performance interconnects are used for communication between processes. mpimon performs parameter control, checking as many of the specified options and parameters as possible.
library, which in turn may (e.g. Myrinet or SCI) or may not (e.g. TCP/IP) require a kernel driver. These provider libraries provide a network device to SMC.
2.2.1 Network devices
There are two basic types of network devices in SMC: native and DAT. The native devices are built in, and are neither replaceable nor upgradable without replacing the Scali MPI Connect package.
2.2.3.2 DET
Scali has developed a device called Direct Ethernet Transport (DET) to improve Ethernet performance. This device bypasses the TCP/IP stack and uses raw Ethernet frames for sending messages. DET devices are bondable over multiple Ethernets. The /opt/scali/sbin/detctl command provides a means of creating and deleting DET devices, and /opt/scali/bin/detstat can be used to obtain statistics on the devices.
root# detstat -r det0   # reset statistics for the det0 device
root# detstat -r -a     # reset statistics for all DET devices
2.2.4 Myrinet
2.2.4.1 GM
This is an RDMA capable device that uses the Myricom GM driver and library. A GM release above 2.0 is required. This device is straightforward and requires no configuration other than the presence of the libgm.so library in the library path (see /etc/ld.so.conf). Note: Myricom GM software is not provided by Scali.
2.2.6 SCI
This is a built-in device that uses the Scali SCI driver and library (ScaSCI). This driver is for the Dolphin SCI network cards; please see the ScaSCI Release Notes for specific requirements. The device itself is straightforward and requires no configuration, but for multi-dimensional toruses (2D and 3D) the Scali SCI Management system (ScaConf) needs to be running somewhere in your system.
Figure 2-4: Resources and communication concepts in Scali MPI Connect
2.3.2 Inlining protocol
With the inlining protocol the application's data is included in the message header. The inlining protocol utilizes one or more channel ringbuffer entries.
2.3.3 Eagerbuffering protocol
The eagerbuffering protocol is used when medium-sized messages are to be transferred.
2.3.5 Zerocopy protocol
The zerocopy protocol is a special case of the transporter protocol. It includes the same steps as the transporter protocol, except that data is written directly into the receiver's buffer instead of being buffered in the transporter ringbuffer. The zerocopy protocol is selected if the underlying hardware can support it. To disable it, set the zerocopy_count or the zerocopy_size parameter to 0.
2.4 Support for other interconnects
A uDAPL 1.
ROMIO is a high-performance, portable implementation of MPI-IO, the I/O chapter of MPI-2, and has become a de-facto standard for MPI-I/O (in terms of interface and semantics). ROMIO is a library parallel to the MPI library for the application, but it depends on an MPI implementation to set up the environment and do the communication. See section 3.2.6 for more information on how to compile and link applications that need MPI-IO.
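As a quick illustration of the MPI-IO interface that ROMIO provides, the following minimal sketch has every rank write its own block of a shared file at a disjoint offset. It uses only standard MPI-2 calls and is not specific to SMC; the file name "testfile" is arbitrary.

    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int rank, buf[4] = { 0, 1, 2, 3 };
        MPI_File fh;
        MPI_Offset offset;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* every rank opens the same file for writing */
        MPI_File_open(MPI_COMM_WORLD, "testfile",
                      MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

        /* each rank writes its buffer at its own offset in the file */
        offset = (MPI_Offset)rank * sizeof(buf);
        MPI_File_write_at(fh, offset, buf, 4, MPI_INT, MPI_STATUS_IGNORE);

        MPI_File_close(&fh);
        MPI_Finalize();
        return 0;
    }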
Chapter 3 Using Scali MPI Connect
This chapter describes how to set up, compile, link and run a program using Scali MPI Connect, and briefly discusses some useful tools for debugging and profiling. Please note that the "Scali MPI Connect Release Notes" are also available as a file in the /opt/scali/doc/ScaMPI directory.
3.1 Setting up a Scali MPI Connect environment
3.1.1 Scali MPI Connect environment variables
The use of Scali MPI Connect requires that some environment variables be defined.
3.2.2 Compiler support
Scali MPI Connect is a C library built using the GNU compiler. Applications can, however, be compiled with most compilers, as long as they are linked with the GNU runtime library. The details of linking with the Scali MPI Connect libraries vary depending on which compiler is used. Check the "Scali MPI Connect Release Notes" for information on supported compilers and how linking is done.
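As an illustration only, and assuming an installation under /opt/scali with headers in include/ and a library named libmpi (the actual paths, library names and any additional libraries are given in the Release Notes), compiling and linking could look like this:

    user% gcc -c -I/opt/scali/include myprog.c
    user% gcc -o myprog myprog.o -L/opt/scali/lib -lmpi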
3.2.5 Notes on compiling and linking on the Power series
The Power series processors (PowerPC, POWER4 and POWER5) are both 32- and 64-bit capable. Only 64-bit versions of Linux are provided by SUSE and Red Hat, and only a 64-bit OS is supported by Scali. However, the Power families are capable of running 32-bit programs at full speed while running a 64-bit OS. For this reason Scali supports running both 32-bit and 64-bit MPI programs.
<pid> is the Unix process identifier of the monitor program mpimon, and <hostname> is the name of the node where mpimon is running. Note: SMC requires a homogeneous file system image, i.e. a file system providing the same path and program names on all nodes of the cluster on which SMC is installed.
3.3.2 mpimon - monitor program
The control and start-up of a Scali MPI Connect application are monitored by mpimon.
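For orientation, a typical start-up follows the pattern below, where the application and its arguments are separated from the node list by "--" and each node name may be followed by the number of processes to start there. The node names are placeholders, and the full option list is given in section 3.11:

    user% mpimon /opt/scali/examples/bin/hello -- node1 2 node2 2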
This control over placement of processes can be very valuable when application performance depends on all the nodes having the same amount of work to do.
3.3.2.3 Controlling options to mpimon
The program mpimon has a multitude of options which can be used for optimising SMC performance. Normally it should not be necessary to use any of these options. However, unsafe MPI programs might need buffer adjustments to solve deadlocks.
By default the processes' output to stdout all appears in the stdout of mpimon, where it is merged in some random order. It is however possible to keep the outputs apart by directing them to files that have unique names for each process. This is accomplished by giving mpimon the option -separate_output, e.g., -separate_output all to have each process deposit its stdout in a file.
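For example, using the same invocation conventions as above (the program name and node are placeholders), a two-process run whose outputs go to separate files could be started as:

    user% mpimon -separate_output all myprog -- node1 2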
For each MPI process, SMC will try to establish contact with every other MPI process using the devices in the order listed. This enables mixed interconnect systems, and provides a means of working around failed hardware. For example, in a system where the primary interconnect is Myrinet and one node has a faulty card, the device list in the example means that all communication to and from the faulty node will happen over TCP/IP, while the remaining nodes will use Myrinet.
: all (default), none, or MPI-process number(s).
-part <partition> : Use nodes from partition.
-q : Keep quiet, no mpimon printout.
-t : Test mode, no MPI program is started.
Parameters not recognized are passed on to mpimon.
3.4 Suspending and resuming jobs
From time to time it is convenient to be able to suspend regular jobs running on a cluster in order to allow a critical, perhaps real-time, job to use the cluster.
As this feature is limited to TCP communication only, it will not have any effect when using native RDMA drivers such as those for Infiniband or Myrinet. Note that the combination of tfdr and failover mode is not supported in this version of Scali MPI Connect. Data errors will be logged using the standard syslog mechanism.
3.7 Debugging and profiling
The complexity of debugging programs can grow dramatically when going from serial to parallel programs.
3.7.2 Built-in tools for debugging
The built-in tools for debugging in Scali MPI Connect cover discovery of the MPI calls used, through tracing and timing, and an attachment point for processes that fault with a segmentation violation. Tracing and timing are covered in Chapter 4.
3.8 Controlling communication resources
Even though it is normally not necessary to set buffer parameters when running applications, it can be done, e.g., for performance reasons. Scali MPI Connect automatically adjusts communication resources based on the number of processes in each node, and on pool_size and chunk_size. The built-in devices SMP and TCP/IP use a simplified protocol based on serial transfers.
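If buffer sizes do need forcing, the parameters are passed as options to mpimon. The sketch below is illustrative only: the values are arbitrary and the size-suffix syntax is an assumption, so consult the option list in section 3.11 for the authoritative spelling:

    user% mpimon -pool_size 32M -chunk_size 1M myprog -- node1 2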
3.9 Good programming practice with SMC
3.9.1 Matching MPI_Recv() with MPI_Probe()
During development and testing of SMC, Scali has come across several application programs with the following code sequence:
while (...
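Although the rest of the snippet is elided here, the pitfall this section's title points at is well known: probing with MPI_ANY_SOURCE/MPI_ANY_TAG and then receiving with the same wildcards again may deliver a different message than the one just probed. A safer pattern, shown below as a minimal sketch that is not SMC-specific, takes the source and tag from the status returned by MPI_Probe (the buffer size is illustrative):

    char buf[65536];               /* illustrative buffer size */
    MPI_Status status;
    int count;

    /* block until some message is available */
    MPI_Probe(MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, &status);
    /* determine the size of the probed message */
    MPI_Get_count(&status, MPI_BYTE, &count);
    /* receive exactly the message that was probed, not just any message */
    MPI_Recv(buf, count, MPI_BYTE, status.MPI_SOURCE, status.MPI_TAG,
             MPI_COMM_WORLD, MPI_STATUS_IGNORE);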
3.9.5 Unsafe MPI programs
Because of different buffering behavior, some programs may run with MPICH but not with SMC. Unsafe MPI programs may require resources that are not always guaranteed by SMC, and deadlock might occur (since SMC uses spin locks, these may appear to be live locks). If you want to know more about how to write portable MPI programs, see for example MPI: The Complete Reference: Volume 1, The MPI Core [2].
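A classic example of such an unsafe program is sketched below: both ranks post a standard-mode MPI_Send first, which only completes if the implementation buffers the message, so for large messages the exchange can deadlock under any MPI, SMC included. The names N, sbuf, rbuf, rank and peer are illustrative:

    #define N 1000000                       /* large enough to exceed eager buffers */
    double sbuf[N], rbuf[N];
    int peer = (rank == 0) ? 1 : 0;

    /* UNSAFE: both ranks send first, so completion depends entirely
       on the MPI implementation buffering the outgoing message */
    MPI_Send(sbuf, N, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD);
    MPI_Recv(rbuf, N, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    /* SAFE alternative: MPI_Sendrecv pairs the two operations without
       relying on system buffering */
    MPI_Sendrecv(sbuf, N, MPI_DOUBLE, peer, 0,
                 rbuf, N, MPI_DOUBLE, peer, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);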
3.11 Mpimon options
The full list of options accepted by mpimon is given below. To obtain the actual values used for a particular run, include the -verbose option when starting the application.
Chapter 4 Profiling with Scali MPI Connect
The Scali MPI communication library has a number of built-in timing and trace facilities. These features are built into the runtime version of the library, so no extra recompiling or relinking of libraries is needed. All MPI calls can be timed and/or traced, and a number of different environment variables control this functionality. In addition, an implied barrier call can be automatically inserted before all collective MPI calls.
/* find the global sum of the squares */
MPI_Reduce( &my_sum, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD );
/* let rank 0 compute the root mean square */
/* rank 0 broadcasts the RMS to the other nodes */
MPI_Bcast( &rms, 1, MPI_DOUBLE, 0, MPI_COMM_WORLD );
/* perform filtering operation (contrast enhancement) */
/* gather back to rank 0 */
MPI_Gather( recvbuf, my_count, MPI_UNSIGNED_CHAR,
            pixels, my_count, MPI_UNSIGNED_CHAR, 0, MPI_COMM_WORLD );
/* write image to file */
MPI_Finalize();
}
This code uses
-t <selection>  Enable trace for MPI calls in <selection>. MPI_call = 'MPI_call' | 'call'
-x <selection>  Disable trace for MPI calls in <selection>. MPI_call = 'MPI_call' | 'call'
-f <format>     Define format: 'timing', 'arguments', 'rate'
-v              Verbose
-h              Print this list of options
By default only one line is written per MPI call. Calls may be specified with or without the "MPI_" prefix, and in upper or lower case.
0: MPI_Bcast root: 0 Id: 0
my_count = 32768
0: MPI_Scatter Id: 1
1: MPI_Init
1: MPI_Comm_rank Rank: 1
1: MPI_Comm_size Size: 2
1: MPI_Bcast root: 0 Id: 0
my_count = 32768
1: MPI_Scatter Id: 1
1: MPI_Reduce Sum root: 0 Id: 2
1: MPI_Bcast root: 0 Id: 3
0: MPI_Reduce Sum root: 0 Id: 2
0: MPI_Bcast root: 0 Id: 3
1: MPI_Gather Id: 4
1: MPI_Keyval_free
0: MPI_Gather Id: 4
0: MPI_Keyval_free
If more information is needed, the arguments to SCAMPI_TRACE can be enhanced to request more information.
From time to time it may be desirable to trace only one or a few of the processes. Specifying the "-p" option offers the ability to pick the processes to be traced. All MPI calls are enabled for tracing by default. To view only a few calls, specify a "-t <selection>" option; to exclude some calls, add an "-x <selection>" option. The "-t" will first disable all tracing and then enable those calls that match the <selection>. The matching is done using POSIX regular-expression syntax.
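Combining these selectors, and following the same command pattern as the SCAMPI_TIMING example in section 4.4, a run that traces only the broadcast and reduce calls of processes 0 and 1 might look like this (the argument syntax for -p and the exact expression are assumptions based on the description above):

    user% SCAMPI_TRACE="-p 0,1 -t bcast|reduce" mpimon myprog -- node1 2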
1: MPI_Comm_rank        1    3.1us    3.1us    1    3.1us    3.1us
1: MPI_Comm_size        1    1.5us    1.5us    1    1.5us    1.5us
1: MPI_Gather           1  109.9us  109.9us    1  109.9us  109.9us
1: MPI_Init             1     1.0s     1.0s    1     1.0s     1.0s
1: MPI_Keyval_free      1    1.2us    1.2us    1    1.2us    1.2us
1: MPI_Reduce           1   51.5us   51.5us    1   51.5us   51.5us
1: MPI_Scatter          1  138.7us  138.7us    1  138.7us  138.7us
1: Sum                  9     1.0s  112.8ms    9     1.0s  112.8ms
1: Overhead             0    0.0ns             9   27.2us    3.0us
=====================================================================
4.4 Using the scanalyze
user% SCAMPI_TIMING="-s 10" mpimon ...
4.5 Using SMC's built-in CPU-usage functionality
Scali MPI Connect has the capability to report wall clock time, and user and system CPU time, on all processes with a built-in CPU timing facility. To use SMC's built-in CPU-usage timing, it is necessary first to set the environment variable SCAMPI_CPU_USAGE. The information displayed is collected with the system call "times"; see the man pages for more information. The output has two different blocks.
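For example, a run with CPU-usage reporting enabled could be started as follows (the variable only needs to be set; the value 1 is an assumption, and the program and node names are placeholders):

    user% SCAMPI_CPU_USAGE=1 mpimon myprog -- node1 2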
Chapter 5 Tuning SMC to your application
Scali MPI Connect allows the user to exercise control over the communication mechanisms through adjustment of the thresholds that steer which mechanism to use for a particular message. This is one technique that can be used to improve the performance of parallel applications on a cluster. Forcing size parameters to mpimon is usually not necessary; it is only a means of optimising SMC to a particular application, based on knowledge of communication patterns.
5.2 How to optimize MPI performance
There is no universal recipe for getting good performance out of a message passing program, but here are some dos and don'ts for SMC.
5.2.1 Performance analysis
Learn about the performance behaviour of your particular MPI applications on a Scali system by using a performance analysis tool.
5.3.2 Memory consumption increase after warm-up
Remember that group operations (MPI_Comm_{create, dup, split}) may involve creating new communication buffers. If this is a problem, decreasing chunk_size may help.
5.4 Collective operations
A collective communication is a communication operation in which a group of processes works together to distribute or gather together a set of one or more values.
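For example, MPI_Allreduce combines a value from every process and returns the combined result to all of them, a pattern that would otherwise take a reduce followed by a broadcast. A minimal sketch, where my_sum is a hypothetical per-process value:

    double local, global;
    local = my_sum;    /* hypothetical per-process contribution */
    /* combine the values from all processes and give every process the result */
    MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);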
def
4 pair4
5 pipe0
6 pipe1
7 safe
8 smp
By looping through these alternatives the performance of IS varies:
algorithm 0: Mop/s total = 95.60
algorithm 1: Mop/s total = 78.37
algorithm 2: Mop/s total = 34.44
algorithm 3: Mop/s total = 61.77
algorithm 4: Mop/s total = 41.00
algorithm 5: Mop/s total = 49.14
algorithm 6: Mop/s total = 85.17
algorithm 7: Mop/s total = 60.22
algorithm 8: Mop/s total = 48.
Appendix A Example MPI code
A-1 Programs in the ScaMPItst package
The ScaMPItst package is installed together with Scali MPI Connect. The package contains a number of programs in /opt/scali/examples, with executable code in bin/ and source code in src/. A description of the programs can be found in the README file, located in the /opt/scali/doc/ScaMPItst directory. These programs can be used to experiment with the features of Scali MPI Connect.
/* read the image */
for ( i = 0; i < numpixels; i++ ) {
    fscanf( infile, "%u", &buffer );
    pixels[i] = (unsigned char)buffer;
}
fclose( infile );
/* calculate number of pixels for each node */
my_count = numpixels / size;
}
/* broadcast to all nodes */
MPI_Bcast( &my_count, 1, MPI_INT, 0, MPI_COMM_WORLD );
/* scatter the image */
MPI_Scatter( pixels, my_count, MPI_UNSIGNED_CHAR,
             recvbuf, my_count, MPI_UNSIGNED_CHAR, 0, MPI_COMM_WORLD );
/* sum the squares of the pixels in the sub-image */
my_sum = 0;
for (
    }
    fflush( outfile );
    fclose( outfile );
  }
}
MPI_Finalize();
return 0;
}
A-2.1 File format
The code contains the logic to read and write images in .pgm format.
Appendix B Troubleshooting
This appendix offers initial suggestions for what to do when something goes wrong with applications running with SMC. When problems occur, first check the list of common errors and their solutions; an updated list of SMC-related Frequently Asked Questions (FAQ) is posted in the Support section of the Scali website (http://www.scali.com). If you are unable to find a solution to the problem(s) there, please read this chapter before contacting support@scali.com.
B-1.2 Why can I not start mpid?
mpid opens a socket and assigns a predefined mpid port number (see /etc/services for more information) to the end point. If mpid is terminated abnormally, the mpid port number cannot be re-used until a system-defined timer has expired. To resolve: use netstat -a | grep mpid to observe when the socket is released. When the socket is released, restart mpid.
B-1.2.1 Bad clean up
A previous SMC run has not terminated properly.
Appendix C Install Scali MPI Connect Scali MPI Connect can be installed on clusters in one of two ways, either as part of installing clusters from scratch with Scali Manage, or by installing it on each particular node in systems that do not use Scali Manage. In the first case the default when building clusters is to include Scali MPI Connect as well, whereas in the second case the cluster is probably managed with some other suite of tools that do not integrate with Scali MPI Connect.
C-2 Install Scali MPI Connect for TCP/IP
To install Scali MPI Connect for TCP/IP, specify the -t option to smcinstall. No further configuration is needed.
C-3 Install Scali MPI Connect for Direct Ethernet
To install Scali MPI Connect for Direct Ethernet, specify the -e option to smcinstall. This option has the following additional syntax:
-e <device(s)> : configures DET provider(s). Use a comma-separated list for channel aggregation.
C-5 Install Scali MPI Connect for Infiniband
When installing for InfiniBand you must obtain a software stack from your vendor; the stacks the different vendors provide differ. If you have a binary release, install it before SMC and give the path to the InfiniBand software in the -b option to smcinstall. Example:
root# ./smcinstall -b /opt/Infinicon
It is no problem if you install the InfiniBand software after SMC; you only need to modify /opt/scali/etc/ScaMPI.
-n <hostname> : Specify hostname of Scali license server. This option tells the software which host to contact to check out a license. This can also be set manually by modifying the scalm_net_server parameter in /opt/scali/etc/scalm.conf.
-l : Creates a license request to be sent to license@scali.com. Host information from the license server must be included in the license request.
Scali MPI Connect is licensed software.
C-11.1 Troubleshooting 3rd-party DAT providers
The only requirements are that the libraries have the proper permissions for shared objects, and that /etc/dat.conf is formatted according to the standard. All available devices are listed with the scanet command.
C-11.2 Troubleshooting the GM provider
The GM provider provides a network device for each Myrinet card installed on the node, named gm0, gm1, etc.
Appendix D Bracket expansion and grouping
To ease usage of Scali software on large cluster configurations, many of the command line utilities have bracket expansion and grouping functionality.
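As a purely hypothetical illustration of the idea: a pattern such as node[1-3] would expand to node1 node2 node3, and a comma inside the brackets, as in node[1,3], would select node1 and node3. The exact syntax each utility accepts is documented with that utility.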
Appendix E Related documentation
[1] MPI: A Message-Passing Interface Standard, Version 1.1, Message Passing Interface Forum, June 12, 1995, http://www.mpi-forum.org
[2] MPI: The Complete Reference: Volume 1, The MPI Core, Marc Snir, Steve W. Otto, Steven Huss-Lederman, David W. Walker, Jack Dongarra, 2nd edition, 1998, The MIT Press, http://www.mitpress.
List of figures
1-1 A cluster system
2-1 The way from application startup to execution
2-2 Scali MPI Connect relies on DAT to interface to a number of interconnects
2-3 Thresholds for different communication protocol
2-4 Resources and communication concepts in Scali MPI Connect
3-1
Index
Benchmarking ScaMPI
Communication protocols in ScaMPI
  Eagerbuffering protocol
  Inlining protocol
SCAMPI_INSTALL_SIGSEGV_HANDLER, built-in SIGSEGV handler
SCAMPI_NODENAME, set hostname
SCAMPI_TIMING, built-in timing facility
SCAMPI_TRACE, built-in trace facility
SCAMPI_WORKING_DIRECTORY, set working directory