Parallel Maximum Satisfiability Solver

Aitong (Ivy) Li  &  Ananya Aggarwal  |  aitongl.github.io

Goals and Deliverables

Core Goals

Extended Goals

Half-Week Breakdown

Apr 13–16 (Responsible: Ivy)

Apr 17–19 (Responsible: Ananya)

Apr 20–23 (Responsible: Ivy)

Apr 24–26 (Responsible: Ivy)

Apr 27–30

Work Completed So Far

We began by studying how the Glucose library works. We identified the sections of the library relevant to our project, then used it to implement a purely sequential MaxSAT solver. The solver iterates over bounds B = 0, 1, 2, ...; for each B, it encodes the constraint sum(relax_vars) ≤ B and calls Glucose's solver under that assumption. Learned clauses are reused across iterations, giving the sequential solver a significant advantage over solving each bound from scratch.

We also experimented with a naive parallel version that statically splits the search space, partitioning the candidate bounds on the minimum number of violated soft clauses across threads. After some calibration and testing, we switched to a dynamic work-queue approach that assigns the next unclaimed bound B to whichever thread finishes first, avoiding the load imbalance of static partitioning. Threads share only the current best cost, updated atomically, which allows early termination once another thread finds a better solution.

How We're Doing with Respect to Proposal

We are on track with respect to the core goals in our project proposal: the first iteration of our algorithm is working. However, it does not yet scale well across multiple cores, which concerns us; we suspect this will slow our future progress and may hinder our ability to expand to multiple communication protocols and other extended goals.

We still believe we will be able to achieve all of our core goals as outlined in the half-week breakdown.

Preliminary Results

Sequential Baseline

Benchmark                      Cost   Solve Time (s)
close_solutions-teams16_l6a    14     0.639
close_solutions-teams20_l5a    16     3.778
xai-mindset2-zoo               20     0.027
c-inference-pre-processing     0      0.000
causal-discovery               26     6.286

Parallel Version — Solve Time by Thread Count (s)

Benchmark                      Cost   1T      2T      4T      8T
close_solutions-teams16_l6a    14     0.582   0.360   1.017   1.570
close_solutions-teams20_l5a    16     3.752   4.089   6.178   7.797
xai-mindset2-zoo               20     0.024   0.026   0.023   0.033
c-inference-pre-processing     0      0.000   0.001   0.001   0.003
causal-discovery               26     6.250   4.823   3.583   5.146

These are the times collected so far for the two approaches we have implemented. We plan to select benchmarks that expose more opportunity for parallelism and re-evaluate both approaches on those tests.

Goals for Poster Session

Display

At the final poster session we plan to display speedup graphs across thread counts. So far we have timed our code on a select few test cases, similarly to how timing was done in the previous assignments.

Concerns