pressure update will happen approximately every millisecond between neighbors.

The other alternative is to build the load manager with an off-the-shelf single-chip microcomputer. These components normally have only one or two communication channels, far fewer than the gradient model requires. One way to circumvent this limitation is to sample the signals directly in microprocessor software, at the cost of reduced channel speed. With available microcomputer chips the likely speed is about 2400 or 1200 bits per second per channel, so an update takes four or eight milliseconds respectively. External serial communication devices may be added as a front end to the single-chip controller to increase the update rate.

2.4.7 Discussion

2.4.7.1 Group address. There are cases where a task must be evaluated by a specific processor or by any one member of a group. An apply packet with a specific destination address cannot be load balanced, since it cannot be migrated; such packets are routed in the same way as point-to-point data packets. An apply packet may instead be designated to run on a certain type of processor. This class of application is easily accommodated by the gradient model through a group addressing scheme. An apply packet whose destination is a group address is treated as a regular task. When the task is absorbed by a processor, the node checks its own group identification against the destination address; if they do not match, the task is reinjected into the system. This try-and-reinject method is costly when the group is relatively small, but becomes more attractive as the group grows. One example is a system with interleaved floating-point processors: a compute-bound task may designate the group address of the floating-point processors as its destination. Since the floating-point processors are interleaved with regular processors, the overhead of absorption and reinjection should be minimal.
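The absorb-verify-reinject cycle described above can be sketched as follows. This is only an illustrative model, not the original implementation: the names Task, Node, GROUP_FPU, and schedule are hypothetical, and a simple scan over nodes stands in for the gradient-driven task migration of the real system.

```python
from dataclasses import dataclass

# Hypothetical group address for the floating-point processors.
GROUP_FPU = "fpu"

@dataclass
class Task:
    destination: str  # a group address, e.g. GROUP_FPU

class Node:
    def __init__(self, node_id, groups):
        self.node_id = node_id
        self.groups = set(groups)  # group identifications this node belongs to

    def absorb(self, task, reinject):
        """Absorb a task; run it if the group matches, else reinject it."""
        if task.destination in self.groups:
            return f"run on node {self.node_id}"
        reinject(task)  # address mismatch: put the task back into the system
        return None

def schedule(task, nodes):
    """Offer the task to nodes in turn until one whose group matches absorbs
    it; the count of reinjections models the cost of the scheme."""
    reinjections = 0
    for node in nodes:
        queue = []
        result = node.absorb(task, queue.append)
        if result is not None:
            return result, reinjections
        reinjections += 1
    return None, reinjections

# Interleaved layout: odd-numbered nodes are floating-point processors.
nodes = [Node(i, [GROUP_FPU] if i % 2 else []) for i in range(4)]
result, cost = schedule(Task(GROUP_FPU), nodes)
```

With the interleaved layout the task is reinjected only once (by node 0) before node 1 absorbs it, illustrating why interleaving keeps the overhead of this scheme small.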