We started by understanding how the Glucose library works: we identified the sections of the library relevant to us and then used it to implement a purely sequential MaxSAT solver. The solver iterates B = 0, 1, 2, ...; for each B it encodes the constraint sum(relax_vars) ≤ B and calls Glucose under that assumption. Learned clauses are reused across iterations, giving the sequential solver a significant advantage over solving from scratch each time.
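To illustrate the linear-search loop, here is a minimal Python sketch (the actual solver is C++ on Glucose; `solve_with_bound` is a brute-force stand-in for a Glucose solve-under-assumptions call, and the cardinality constraint sum(relax_vars) ≤ B is replaced by directly counting violated soft clauses):

```python
from itertools import product

def solve_with_bound(hard, soft, bound, n_vars):
    """Stand-in for a Glucose solve-under-assumptions call: brute-force
    search for an assignment that satisfies all hard clauses while
    violating at most `bound` soft clauses. Clauses are lists of int
    literals, DIMACS-style (positive = var true, negative = var false)."""
    def sat(clause, assign):
        return any((lit > 0) == assign[abs(lit) - 1] for lit in clause)
    for assign in product([False, True], repeat=n_vars):
        if all(sat(c, assign) for c in hard):
            if sum(not sat(c, assign) for c in soft) <= bound:
                return True
    return False

def linear_search_maxsat(hard, soft, n_vars):
    """Iterate B = 0, 1, 2, ... until the instance is satisfiable with
    at most B violated soft clauses; that B is the optimum cost. In the
    real solver, each iteration reuses Glucose's learned clauses."""
    for bound in range(len(soft) + 1):
        if solve_with_bound(hard, soft, bound, n_vars):
            return bound
    return None  # hard clauses alone are unsatisfiable

# Toy instance: (x1 or x2) hard, with soft preferences for both false.
hard = [[1, 2]]
soft = [[-1], [-2]]
print(linear_search_maxsat(hard, soft, 2))  # prints 1
```

The real implementation keeps one incremental Glucose instance alive across bounds, which is where the learned-clause reuse comes from.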
In addition, we experimented with a naive parallel version that splits the search space by statically partitioning the candidate bounds on the minimum number of violated soft clauses across threads. After some calibration and testing, we decided to instead use a dynamic work-queue approach that assigns the next unclaimed bound B to whichever thread finishes first, avoiding the load imbalance of static partitioning. Threads share only the current best cost atomically, allowing early termination when another thread finds a better solution.
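The work-queue scheme can be sketched as follows. This is a structural Python sketch only: the real implementation is multithreaded C++ where the shared counter and best cost would be `std::atomic`; here a lock plays that role, and `check_bound(B)` is a hypothetical stand-in for a Glucose call under the assumption sum(relax_vars) ≤ B (it must be monotone in B, as SAT-at-B implies SAT-at-B+1):

```python
import threading

def parallel_bound_search(check_bound, max_bound, n_threads=4):
    """Dynamic work queue: each thread claims the next untried bound B,
    and all threads share the best (lowest) satisfiable bound found so
    far, so they stop early once no smaller bound remains unclaimed."""
    next_bound = 0
    best = max_bound + 1  # sentinel: no satisfiable bound found yet
    lock = threading.Lock()

    def worker():
        nonlocal next_bound, best
        while True:
            with lock:
                if next_bound >= best:  # nothing below current best left
                    return
                b = next_bound          # claim the next unclaimed bound
                next_bound += 1
            if check_bound(b):          # solve outside the lock
                with lock:
                    best = min(best, b) # publish the improved cost
                return

    threads = [threading.Thread(target=worker) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return best if best <= max_bound else None

print(parallel_bound_search(lambda b: b >= 5, 10))  # prints 5
```

Because bounds are handed out in increasing order, every bound below the returned value has been checked and found unsatisfiable, so the result is the true minimum even when threads finish out of order.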
We are doing well with respect to the core goals in our project proposal, and the first iteration of our algorithm is working. However, it is not scaling well across multiple cores, which is a concern: we suspect this will slow our future progress and may hinder our ability to expand to multiple communication protocols and our other extended goals.
We still believe we will be able to achieve all the core goals outlined in our half-week breakdown.
| Benchmark | Cost | Solve Time (s) |
|---|---|---|
| close_solutions-teams16_l6a | 14 | 0.639 |
| close_solutions-teams20_l5a | 16 | 3.778 |
| xai-mindset2-zoo | 20 | 0.027 |
| c-inference-pre-processing | 0 | 0.000 |
| causal-discovery | 26 | 6.286 |
| Benchmark | Cost | 1 Thread (s) | 2 Threads (s) | 4 Threads (s) | 8 Threads (s) |
|---|---|---|---|---|---|
| close_solutions-teams16_l6a | 14 | 0.582 | 0.360 | 1.017 | 1.570 |
| close_solutions-teams20_l5a | 16 | 3.752 | 4.089 | 6.178 | 7.797 |
| xai-mindset2-zoo | 20 | 0.024 | 0.026 | 0.023 | 0.033 |
| c-inference-pre-processing | 0 | 0.000 | 0.001 | 0.001 | 0.003 |
| causal-discovery | 26 | 6.250 | 4.823 | 3.583 | 5.146 |
These are some of the times collected for the two approaches we have implemented so far. We hope to find more appropriate benchmarks that allow for more parallelism and to reevaluate both approaches on those tests.
We hope to provide graphs of speedup versus thread count at the final poster session. Currently, we time our code on a select few test cases, similarly to how it was done in the previous assignments.