A Practical Guide to Reclaiming Disk Space from MongoDB


Dedicated to the ops folks weighed down by worry over MongoDB's high disk usage.

This post applies to MongoDB 3.2 or later, and all of the following operations run in the Mongo shell. First, confirm that your storage engine is WiredTiger:

db.serverStatus().storageEngine
// expect output: { "name" : "wiredTiger" }

Where to Start

Find the main concern first: use show dbs to locate the biggest databases, then locate the biggest collections after use your_biggest_db.
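
As a quick illustration (the database names and sizes here are hypothetical):

// In the Mongo shell
show dbs
// admin     0.000GB
// big_db  142.500GB   <- the main concern
// local     3.200GB
use big_db

With the biggest database selected, the following helper ranks its collections by size: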

// You can reuse this function multiple times
// during the current session.
function CollectionSizes(collectionNames) {
  let stats = []
  collectionNames.forEach(function (n) {
    stats.push(db[n].stats(1024 * 1024 * 1024))  // show size in GB
  })
  stats = stats.sort(function (a, b) {
    return b['size'] - a['size']
  })

  print(`namespace: data size in GB (disk size in GB)`)
  for (let c of stats) {
    print(`${c['ns']}:  ${c['size']} (${c['storageSize']})`)
  }
}

CollectionSizes(db.getCollectionNames())

Reclaim Disk Space

There are mainly two ways to go: MongoDB's compact command, and moving or deleting data.

During the reclaiming process, pay attention to the extra IO stress it brings. High IO stress increases database read/write latency and may cause MongoDB slaves to fall out of sync. You can monitor sync health with the following script on any node in the cluster:

// When querying from a slave node, first run
// rs.slaveOk();

// query for sync health
rs.status().members.map((x) => {
  return {name: x.name, stateStr: x.stateStr, optimeDate: x.optimeDate, syncingTo: x.syncingTo}
})
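
Replication lag is easiest to read as seconds behind the primary. Here is a minimal sketch of that calculation, assuming the set currently has a PRIMARY member:

// Compute each member's lag in seconds relative to the PRIMARY
// (this will fail if the set has no PRIMARY at the moment)
let members = rs.status().members
let primary = members.find((x) => x.stateStr === 'PRIMARY')
members.forEach((x) => {
  // optimeDate is a Date; subtraction yields milliseconds
  let lag = (primary.optimeDate - x.optimeDate) / 1000
  print(`${x.name} (${x.stateStr}): ${lag} second(s) behind`)
})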

MongoDB Compact

MongoDB seldom returns disk space to the operating system when data is deleted; the freed pages are marked dirty and reserved for future writes. The compact command makes WiredTiger return this space to the operating system, much like PostgreSQL's VACUUM.
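
In its simplest form, compact is run against one collection at a time (the collection name below is a placeholder):

// Compact a single collection on the current node;
// returns { "ok" : 1 } on success
db.runCommand({compact: 'your_collection'})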

Compact needs to be executed on every node in the cluster.

Compact locks the whole database, and replication also stops while it runs on a slave node. If you are running a single-node MongoDB, execute compact during a scheduled maintenance window.

In a MongoDB cluster, however, you can do a rolling compact with zero downtime: start with the slave nodes (don't forget rs.slaveOk()), then finally run rs.stepDown() on the master and compact it as well.
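
The handover step might look like this (the step-down duration is an arbitrary choice, and compactDB refers to the helper defined in the script below):

// On the master: hand over the primary role. The argument is how many
// seconds this node refuses re-election, giving compact a head start.
rs.stepDown(120)
// Once demoted, the node behaves like a slave; allow reads and compact:
rs.slaveOk()
db.getMongo().getDBNames().forEach(compactDB)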

The following script compacts all collections in all non-system databases, sleeping between long-running compacts to reduce oplog sync lag:

// Compact every collection in the given database,
// skipping MongoDB's internal databases.
function compactDB(dbName) {
  if ('local' !== dbName && 'admin' !== dbName && 'system' !== dbName) {
    let subject = db.getSiblingDB(dbName)
    subject.getCollectionNames().forEach(function (collectionName) {
      let taskName = dbName + ' - ' + collectionName
      let startAt = new Date()
      print('compacting: ' + taskName)
      subject.runCommand({compact: collectionName})
      let elapsed = ((new Date()) - startAt) / 1000
      print(taskName + ', finished in ' + elapsed + ' second(s).')
      if (elapsed > 30) {
        print('sleep a while for OpLog sync...')
        sleep(8000)
      }
    })
  }
}

db.getMongo().getDBNames().forEach(compactDB)
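
To verify the effect, re-run the CollectionSizes helper from earlier, or spot-check a single collection (big_table is a placeholder name):

// storageSize is reported in GB thanks to the scale argument
db.big_table.stats(1024 * 1024 * 1024).storageSize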

Move or Delete Data

Moving or deleting data requires a business feasibility assessment first.

Moving and deleting should be executed before compact.

When deleting aged data, if access to the collection can be paused temporarily, we can swap the old collection for a newly created one. Dropping a whole collection is much faster than deleting records, which causes far more IO stress. Furthermore, the disk space taken by the old collection is returned to the operating system immediately.

// Copy the data that is still needed into the new collection
db.big_table.aggregate([{$match: {timestamp: {$gt: some_time}}}, {$out: 'big_table_left'}])
// Create the same indexes on the new collection
db.big_table_left.createIndex({timestamp: -1}, {background: true})
// Rename the old collection out of the way
db.big_table.renameCollection('big_table_legacy')
// Rename the new collection to the old name
db.big_table_left.renameCollection('big_table')
// Drop the old collection after making sure everything is OK
db.big_table_legacy.drop()

If pausing access to the collection is not feasible, the only option is to delete records in batches. Note that sleeping between deleteMany calls reduces the overall IO stress.

let count = 0
let toDelete = []
// select records older than some_time for deletion
db.big_table.find({timestamp: {$lt: some_time}}, {_id: 1}).forEach(function (item) {
  count++
  toDelete.push(item._id)

  if (count % 10000 === 0) {
    print(`progress: ${count}`)
    db.big_table.deleteMany({_id: {$in: toDelete}})
    toDelete.splice(0, toDelete.length)
    // sleep a while to ease the stress on IO
    sleep(100)
  }
})
// delete the last, partially filled batch
if (toDelete.length > 0) {
  db.big_table.deleteMany({_id: {$in: toDelete}})
}

Now that the work is done, let's get a drink!
