Speedup with GPU #42

Closed · Answered by ahmedfgad
asishadhikari asked this question in Q&A
May 14, 2021 · 2 comments · 5 replies

I did not try using a GPU with PyGAD. I did try parallelizing the processing, but found no speedup in the execution time. Unfortunately, the time actually increased compared to not using parallel processing. The reason is that there is no single long-running operation in the genetic algorithm: parent selection, crossover, and mutation each use very little CPU time, so the overhead of parallelization outweighs the work being distributed.
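To make the overhead concrete, here is a minimal, library-agnostic sketch (not PyGAD's internals) that you can run yourself: it applies a cheap mutation operator to a population serially and then through a process pool. Because each task is only microseconds of work, the pickling and scheduling cost of the pool usually makes the parallel version slower.

```python
# Minimal sketch (not PyGAD's internals): cheap GA operators gain nothing
# from a process pool because per-task overhead dominates the work itself.
import random
import time
from concurrent.futures import ProcessPoolExecutor

def mutate(solution, mutation_rate=0.1):
    # Perturb a few genes; this is only microseconds of work per solution.
    return [g + random.gauss(0, 1) if random.random() < mutation_rate else g
            for g in solution]

if __name__ == "__main__":
    population = [[random.random() for _ in range(50)] for _ in range(200)]

    start = time.perf_counter()
    serial = [mutate(s) for s in population]
    print("serial:  ", time.perf_counter() - start)

    start = time.perf_counter()
    with ProcessPoolExecutor() as pool:
        parallel = list(pool.map(mutate, population))
    print("parallel:", time.perf_counter() - start)  # typically slower here
```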

The only place where parallelization can make a real difference is the fitness function. Because the fitness function is problem-specific, it may or may not benefit from parallel processing. Check this article for an example where the fitness function is parallelized: https://hackernoon.com/how-genetic-algorithms-can-compete-with-gradient-descent-and-backprop-9m9t33bq
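When a single fitness evaluation is expensive (a simulation, training a small network, etc.), spreading the evaluations of one generation across processes can pay off. A minimal, library-agnostic sketch, where `expensive_fitness` is just a placeholder for your own costly evaluation:

```python
# Library-agnostic sketch: parallelize only the fitness evaluations of a
# generation. Worthwhile only when one evaluation is expensive enough to
# amortize the process-pool overhead.
import math
import random
from concurrent.futures import ProcessPoolExecutor

def expensive_fitness(solution):
    # Placeholder for a costly evaluation (simulation, model training, ...).
    total = 0.0
    for _ in range(200_000):
        total += sum(math.sin(g) for g in solution)
    return total

if __name__ == "__main__":
    population = [[random.uniform(-1, 1) for _ in range(10)] for _ in range(32)]

    # One worker per CPU core; each evaluates a slice of the population.
    with ProcessPoolExecutor() as pool:
        fitnesses = list(pool.map(expensive_fitness, population))

    best = max(range(len(population)), key=lambda i: fitnesses[i])
    print("best fitness:", fitnesses[best])
```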

Answer selected by asishadhikari