
parallel processing

Source: WordNet®

parallel processing
     n : simultaneous processing by two or more processing units
         [syn: {multiprocessing}]

Source: Free On-Line Dictionary of Computing

parallel processing
     
         The simultaneous use of more than one computer to
        solve a problem.  There are many different kinds of parallel
        computer (or "parallel processor").  They are distinguished by
        the kind of interconnection between processors (known as
        "processing elements" or PEs) and between processors and
        memory.  {Flynn's taxonomy} also classifies parallel (and
        serial) computers according to whether all processors execute
        the same instructions at the same time ("{single
        instruction/multiple data}" - SIMD) or each processor executes
        different instructions ("{multiple instruction/multiple data}"
        - MIMD).
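
        As a rough conceptual sketch (plain Python; the threads and
        data here are only stand-ins for processing elements, not real
        hardware parallelism), SIMD applies one instruction to many
        data elements, while MIMD runs a different instruction stream
        on each processor:

            from concurrent.futures import ThreadPoolExecutor

            data = [1, 2, 3, 4]

            # SIMD style: every PE applies the SAME operation (here,
            # doubling) to a different data element.
            simd_result = [x * 2 for x in data]        # [2, 4, 6, 8]

            # MIMD style: each PE runs a DIFFERENT instruction stream.
            tasks = [lambda: sum(data),                # PE 0: sum
                     lambda: max(data),                # PE 1: maximum
                     lambda: sorted(data)]             # PE 2: sort
            with ThreadPoolExecutor(max_workers=3) as pool:
                futures = [pool.submit(t) for t in tasks]
                mimd_results = [f.result() for f in futures]

            print(simd_result, mimd_results)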
     
        The processors may communicate in order to cooperate in
        solving a problem, or they may run completely independently,
        possibly under the control of another processor which
        distributes work to the others and collects results from them
        (a "{processor farm}").  The difficulty of cooperative
        problem solving is aptly demonstrated by the following dubious
        reasoning:
     
        	If it takes one man one minute to dig a post-hole
        	then sixty men can dig it in one second.
     
        {Amdahl's Law} states this more formally.
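
        Amdahl's Law bounds the speedup of a program of which only a
        fraction p can be parallelised: on n processors the best-case
        speedup is S(n) = 1 / ((1 - p) + p/n).  A small calculation
        (plain Python, illustrative numbers) shows why the sixty men
        are wasted:

            def amdahl_speedup(p, n):
                # Best-case speedup on n processors when a fraction p
                # of the work can be parallelised.
                return 1.0 / ((1.0 - p) + p / n)

            # Even with 95% of the work parallelisable, the speedup
            # can never exceed 1 / (1 - p) = 20, however many "men"
            # are set digging:
            for n in (1, 60, 1000):
                print(n, round(amdahl_speedup(0.95, n), 2))
            # 1 1.0
            # 60 15.19
            # 1000 19.63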
     
        Processors communicate via some kind of network or bus or a
        combination of both.  Memory may be either {shared memory}
        (all processors have equal access to all memory) or private
        (each processor has its own memory - "{distributed memory}")
        or a combination of both.
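
        The difference is easy to see in software terms.  In the
        following sketch (Python's standard threading and
        multiprocessing modules; the counter is illustrative), threads
        share one address space while a child process gets its own
        private copy of memory:

            import multiprocessing
            import threading

            counter = 0                  # one global variable

            def bump():
                global counter
                counter += 1

            if __name__ == "__main__":
                # Shared memory: the child thread's write is visible
                # here, because threads share an address space.
                t = threading.Thread(target=bump)
                t.start(); t.join()
                print(counter)           # 1

                # Private ("distributed") memory: a separate process
                # increments its own copy; the parent never sees it.
                p = multiprocessing.Process(target=bump)
                p.start(); p.join()
                print(counter)           # still 1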
     
        A huge number of software systems have been designed for
        programming parallel computers, both at the {operating system}
        and programming language level.  These systems must provide
        mechanisms for partitioning the overall problem into separate
        tasks and allocating tasks to processors.  Such mechanisms may
        provide either {implicit parallelism}, where the system (the
        {compiler} or some other program) partitions the problem and
        allocates tasks to processors automatically, or {explicit
        parallelism}, where the programmer must annotate the program to
        show how it is to be partitioned.  It is also usual to provide
        synchronisation primitives such as {semaphore}s and {monitor}s
        to allow processes to share resources without conflict.
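
        For example, a counting {semaphore} can limit how many
        processes are inside a critical region at once.  A minimal
        sketch (Python threads standing in for processes; the names
        are illustrative):

            import threading

            slots = threading.Semaphore(2)   # at most 2 users at once
            lock = threading.Lock()          # protects 'active' itself
            active = 0

            def worker(i):
                global active
                with slots:                  # block until a slot frees
                    with lock:
                        active += 1
                        print(f"worker {i} in ({active} active)")
                    # ... use the shared resource here ...
                    with lock:
                        active -= 1

            threads = [threading.Thread(target=worker, args=(i,))
                       for i in range(5)]
            for t in threads: t.start()
            for t in threads: t.join()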
     
        {Load balancing} attempts to keep all processors busy by
        moving tasks from heavily loaded processors to less loaded
        ones.
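
        A common dynamic scheme is to have idle workers pull tasks
        from a shared queue, so that faster or less loaded processors
        automatically take on more work; this also implements the
        {processor farm} described above.  A sketch using Python's
        standard multiprocessing pool (task sizes are illustrative):

            from multiprocessing import Pool

            def task(n):
                # Tasks of very uneven cost.
                return sum(i * i for i in range(n))

            if __name__ == "__main__":
                sizes = [10, 2_000_000, 50, 1_000_000, 30]
                # chunksize=1 hands tasks out one at a time, so an
                # idle worker always grabs the next one: a crude but
                # effective form of load balancing.
                with Pool(processes=4) as pool:
                    results = pool.map(task, sizes, chunksize=1)
                print(results[:2])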
     
        Communication between tasks may be either via {shared memory}
        or {message passing}.  Either may be implemented in terms of
        the other; indeed, at the lowest level, shared memory uses
        message passing, since the address and data signals which flow
        between processor and memory can be regarded as messages.
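
        A message-passing sketch in the same vein (Python's standard
        multiprocessing queue; using None as an end-of-stream marker
        is an illustrative convention):

            from multiprocessing import Process, Queue

            def producer(q):
                for word in ("hello", "world", None):   # None = done
                    q.put(word)                         # send a message

            def consumer(q):
                while (msg := q.get()) is not None:     # receive
                    print("received:", msg)

            if __name__ == "__main__":
                q = Queue()
                p = Process(target=producer, args=(q,))
                c = Process(target=consumer, args=(q,))
                p.start(); c.start()
                p.join(); c.join()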
     
        See also {cellular automaton}.
     
        {Usenet} newsgroup: {news:comp.parallel}.
     
        {Institutions (http://www.ccsf.caltech.edu/other_sites.html)},
        {research groups
        (http://www.cs.cmu.edu/~scandal/research-groups.html)}.
     
        (1996-04-23)