This commit is contained in:
Isaac Shoebottom 2025-01-09 09:46:36 -04:00
parent e9733709be
commit ddb073a5a3
3 changed files with 28 additions and 3 deletions

@@ -1 +1 @@
{"algorithm":{"algorithm":{"currentFile":{"count":1,"lastUpdated":1736360369637}}}}
{"algorithm":{"algorithm":{"currentFile":{"count":1,"lastUpdated":1736360369637}}},"computing":{"computing":{"currentFile":{"count":2,"lastUpdated":1736427522780}}},"tasks":{"tasks":{"currentFile":{"count":1,"lastUpdated":1736428160890}}},"parallelization":{"parallelization":{"currentFile":{"count":1,"lastUpdated":1736428239757}}},"Implicit":{"Implicit":{"currentFile":{"count":1,"lastUpdated":1736429811730}}},"Semi-implicit":{"Semi-implicit":{"currentFile":{"count":1,"lastUpdated":1736429857427}}}}

@@ -13,7 +13,7 @@
"state": {
"type": "markdown",
"state": {
"file": "UNB/Year 5/Semester 2/HIST3925/Lecture Notes.md",
"file": "UNB/Year 5/Semester 2/CS4745/Lecture Notes.md",
"mode": "source",
"source": false
},
@@ -167,9 +167,11 @@
},
"active": "46a6eee907728856",
"lastOpenFiles": [
"UNB/Year 5/Semester 2/CS4745",
"UNB/Year 5/Semester 2/HIST3925/Lecture Notes.md",
"UNB/Year 5/Semester 2/CS4745/Lecture Notes.md",
"UNB/Year 5/Semester 2/HIST3925",
"UNB/Year 5/Semester 2/CS3383/Lecture Notes.md",
"UNB/Year 5/Semester 2/HIST3925/Lecture Notes.md",
"UNB/Year 5/Semester 2/CS3383",
"UNB/Year 5/Semester 2",
"UNB/Year 5/Semester 1/CS3113/Exam Review.md",

@@ -0,0 +1,23 @@
### Models of Parallel Computing
- SISD (x86)
- SIMD (AVX)
- MISD
- MIMD (GPU)
### History of Parallel Computing
Prior to the 1990s, computers had modules for parallel computation, usually specialized.
- Clusters of computers (the Beowulf revolution), connected by a network, could be dispatched commands and perform computation together.
- Grid computing was the idea of connecting "all" computers into a grid, similar to the power grid, and distributing computing resources among connected peers.
- Cloud computing is a model of distributing computing resources based on payment to a provider, which offers managed computing.
- This led to the Hadoop file system (2004), which enabled computation directly on files, with no need to load a file's entire contents into memory. That in turn led to MapReduce, the main framework the file system worked under.
- GPU computing naturally lends itself to linear algebra operations (transformations of pixels and triangles), making it suitable for massive parallelization of these tasks, even outside of graphics.
- FPGA devices can be used to parallelize specific/specialized tasks, given a logic design that lends itself to parallelization.
- Quantum computers are highly parallel accelerators, still in development. The algorithms have been known since the 1980s, but the hardware is still behind.
- Multicore systems are designed for improved latency and can fulfill general-purpose computation. Example: a CPU with multiple cores.
- Manycore systems have a much higher core count and focus on throughput (the amount of computation performed per unit of time). Example: a GPU.
### Parallel Programming
Implicit
- MapReduce
- fork-join, Executor services with thread pools
- Allows shifting attention from implementation to task description
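The executor-with-thread-pool style can be sketched with Python's `concurrent.futures` (an analogue of the executor services the notes name, not the course's own example): the caller describes tasks, and the pool decides how to schedule them.

```python
from concurrent.futures import ThreadPoolExecutor

def square(n):
    # The task description: what to compute, not how to schedule it.
    return n * n

def sum_of_squares(numbers, workers=4):
    # The executor forks the tasks across a thread pool and joins the
    # results; scheduling is implicit from the caller's point of view.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(square, numbers))
```

Note how the code contains no thread creation, partitioning, or locking; attention stays on the task description, which is the point of the implicit style.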
Semi-implicit
- Parallel for
- OpenMP
- Lets you use pre-defined directives to achieve parallel execution without focusing on how it works
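In C, OpenMP's `#pragma omp parallel for` marks a loop and lets the runtime split its iterations among threads. Python has no compiler directives, but a hedged analogue of the same "parallel for" idea maps the loop body over the index range with a process pool:

```python
from multiprocessing import Pool

def body(i):
    # One independent iteration of the "parallel for" loop body.
    return i * i

def parallel_for(n, workers=4):
    # Rough analogue of:
    #   #pragma omp parallel for
    #   for (i = 0; i < n; i++) out[i] = body(i);
    # The pool divides the iteration space among workers; map preserves
    # the original iteration order in the result.
    with Pool(processes=workers) as pool:
        return pool.map(body, range(n))
```

As with OpenMP, correctness requires that iterations be independent; the programmer declares the loop parallel and leaves the work division to the runtime.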
Explicit
- Scatter, Gather
- pthreads
- Developers have the most control over the computation, but must manage synchronization themselves and ensure the results are correct
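At the explicit level (pthreads in C), the programmer creates the threads, partitions the data, and synchronizes by hand. A Python sketch of that same explicit style using `threading` — the partitioning, the lock, and the joins are all the programmer's responsibility, which is exactly the burden the bullet above describes:

```python
import threading

def parallel_sum(numbers, workers=4):
    total = 0
    lock = threading.Lock()

    def worker(chunk):
        nonlocal total
        s = sum(chunk)      # local work on this thread's partition
        with lock:          # explicit synchronization: without the lock,
            total += s      # concurrent += on total could lose updates

    # Explicit partitioning of the data among the threads (by striding).
    chunks = [numbers[i::workers] for i in range(workers)]
    threads = [threading.Thread(target=worker, args=(c,)) for c in chunks]
    for t in threads:
        t.start()
    for t in threads:
        t.join()            # explicit join, as with pthread_join
    return total
```

Removing the lock would make the function appear to work most of the time while occasionally producing a wrong total, which is the kind of problem the explicit model leaves to the developer.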
Compilers are good at generating optimal sequential code, but compiler optimizations may prevent a parallel algorithm from working as expected, and instruction reordering can also affect results.