The project database is MongoDB, sharded, with each shard being a replica set of
three DB servers. We now need to add one more DB server to one of the replica
sets. I'm pasting the reference material here first and will organize it when I
have time:
Add Members to a Replica Set
--Production Notes
If
you have a backup or snapshot of an existing member, you can move the
data files (i.e. /data/db or dbpath) to a new system and use them to
quickly initiate a new member. These files must be:
1. clean:
the existing dataset must be from a consistent copy of the database
from a member of the same replica set. See the Backup and Restoration
Strategies document for more information.
http://docs.mongodb.org/manual/administration/backups/
2. recent: the copy must be more recent than the oldest operation in the
primary member's oplog. The new secondary must be able to become current
using operations from the primary's oplog.
-------------------------------------------------------------------------------------------
Creating a slave from an existing master's disk image
If you can stop write operations to the master for an indefinite
period, you can copy the data files from the master to the new slave,
and then start the slave with --fastsync.
Be careful with --fastsync. If the data is not perfectly in sync, a discrepancy will exist forever.
--fastsync is a way to start a slave from an existing master disk
image/backup. This option declares that the administrator guarantees the
image is correct and completely up to date with that of the master. If
you have a full and complete copy of data from a master, you can use
this option to avoid a full synchronization upon starting the slave.
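A minimal sketch of the procedure, using the legacy master/slave options; the
dbpath and hostnames are assumptions, not from the source:

```shell
# On the master: stop writes (shut down mongod or use fsync-and-lock), then
# copy the data files to the new slave. Paths and hosts are examples only.
rsync -a /data/db/ newslave.example.net:/data/db/

# On the new slave: start mongod, declaring the copy is complete and current.
mongod --dbpath /data/db --slave --source master.example.net:27017 --fastsync
```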
/////////////////////////////
Would like to get documentation for the --fastsync feature. My hope is
the ability to make raw file system copies to seed slaves, then tell the
slave where to "pick up" reads from the oplog. This would make
deploying slaves much faster than performing a initial sync, especially
when there is a slow connection between master/slave (i.e. across data
centers).
//////////////////////////////
Yes. --fastsync is a way to speed up the sync when you have a recent
copy of all the data and oplog
On Feb 24, 3:06pm, tetlika <tetl...@xxxxxxxxx> wrote:
> ah ok
>
> I think i understood: fast resync is used just when we have a copy of
> data - including oplogs - it just tells not to do a full resync
>
> when we dont use fastresync all data will be synced, not depending on
> oplog
>
> On Feb 25, 00:55, sridhar <srid...@xxxxxxxxx> wrote:
>
> > fastsync does not replay all the oplog. It only replays the necessary
> > entries post where your database is at. If your oplog is not big
> > enough and has rolled over, fastsync falls back to a full resync.
>
> > On Feb 24, 2:49pm, tetlika <tetl...@xxxxxxxxx> wrote:
>
> > > Hi!
>
> > > According to
> > > http://www.mongodb.org/display/DOCS/Adding+a+New+Set+Member
> > > fastsync is just "replaying" ALL oplog on the new slave, so if we
> > > dont have the oplog big enough - we need copy data to the new slave
> > > and run it with the fastsync option?
Creating a slave from an existing slave's disk image
You can just copy the other slave's data file snapshot without any
special options. Note data snapshots should only be taken when a mongod
process is down or in fsync-and-lock state.
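A sketch of seeding a new slave from another slave's files, assuming the donor's
mongod is stopped; hosts and paths below are placeholders:

```shell
# Take the copy only while the donor slave's mongod is down
# (or while it is in fsync-and-lock state).
rsync -a donor-slave.example.net:/data/db/ /data/db/

# Start the new slave normally; no special options are needed.
mongod --dbpath /data/db --slave --source master.example.net:27017
```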
Sharded Cluster and Replica Set Considerations
The underlying architecture of sharded clusters
and replica sets
presents several
challenges for creating backups. This section describes how to make
quality backups in environments with these configurations and how to
perform restorations.
Back Up Sharded Clusters
Sharding complicates backup operations, because it is impossible to
create a backup of a single moment in time from a distributed cluster
of systems and processes.
Depending on the size of your data, you can back up the cluster as a
whole or back up each mongod
instance. The following
section describes both procedures.
Back Up the Cluster as a Whole Using mongodump
If your sharded cluster
comprises a small collection of data,
you can connect to a mongos
and issue the
mongodump
command. You can use this approach if the following
is true:
- It’s possible to store the entire backup on one system or on a single
storage device. Consider both backups of entire instances and
incremental dumps of data.
- The state of the database at the beginning of the operation is
not significantly different from the state of the database at the
end of the backup. If the backup operation cannot capture a
consistent state, this is not a viable option.
- The backup can run and complete without affecting the performance of
the cluster.
Note
If you use mongodump
without specifying a database or
collection, the output will contain both the collection data and the
sharding config metadata from the config servers.
You cannot use the --oplog
option for
mongodump
when dumping from a mongos
. This option is only
available when running directly against a replica set
member.
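For example (the hostname and output path are assumptions, not from the
source), a whole-cluster dump through a mongos might look like:

```shell
# Connect to a mongos, not an individual shard, so the dump covers the
# whole sharded namespace. Host, port, and path are hypothetical.
mongodump --host mongos0.example.net --port 27017 --out /backups/cluster
```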
Back Up from All Database Instances
If your sharded cluster
is too large for the mongodump
command, then you must back up your data either by creating a snapshot of the cluster
or by creating a binary dump of each database. This section describes both.
In both cases:
- The backups must capture the database in a consistent state.
- The sharded cluster must be consistent in itself.
This procedure describes both approaches:
-
Disable the balancer
process that equalizes the
distribution of data among the shards
. To disable
the balancer, use the sh.stopBalancer()
method in the
mongo
shell, and see the
Disable the Balancer
procedure.
Warning
It is essential that you stop the balancer before creating
backups. If the balancer remains active, your resulting backups
could have duplicate data or miss some data, as chunks
migrate while recording backups.
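Connected to a mongos with the mongo shell, the balancer step above can be
sketched as:

```javascript
use config
sh.stopBalancer()      // blocks until any in-progress chunk migration finishes
sh.getBalancerState()  // should now report false
```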
-
Lock one member of each replica set in each shard so that your backups reflect your
entire database system at a single point in time. Lock all shards
in as short an interval as possible.
To lock or freeze a sharded cluster, you must:
- Use the db.fsyncLock()
method in the mongo
shell connected to one mongod
instance in each shard, which flushes writes and
blocks write operations.
- Shut down one of the config servers
to prevent all metadata changes during the backup process.
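As a sketch, run against the member of each shard that you are backing up
(typically a secondary):

```javascript
// Flush pending writes to disk and block further writes on this member.
db.fsyncLock()

// ... perform the backup of this member's data files ...

// Release the lock afterwards.
db.fsyncUnlock()
```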
-
Use mongodump
to back up one of the config servers.
This backs up the cluster’s metadata. You
only need to back up one config server, as they all hold replicas of
the same information.
Issue this command against one of the config servers directly, or through the
mongos:
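The source omits the command itself; a hedged example, assuming a config server
listening on port 27019, would be:

```shell
# Dump only the "config" database from a config server (host/port hypothetical).
mongodump --host cfg0.example.net --port 27019 --db config --out /backups/configdb
```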
-
Back up the replica set members of the shards that you locked. You may back up
shards one at a time or in parallel. For each shard, do one of the
following:
- Create a filesystem snapshot of the member's data files.
- Create a binary dump of the member's data with mongodump.
-
Unlock all locked replica set members of each shard using the
db.fsyncUnlock()
method in the mongo
shell.
-
Restore the balancer with the sh.startBalancer()
method
according to the Disable the Balancer
procedure.
Use the following command sequence when connected to the
mongos
with the mongo
shell:
use config
sh.startBalancer()
Schedule Automated Backups
If you have an automated backup schedule, you can disable all
balancing operations for a period of time. For instance, consider the
following command:
use config
db.settings.update(
    { _id : "balancer" },
    { $set : { activeWindow : { start : "6:00", stop : "23:00" } } },
    true
)
This operation configures the balancer to run between 6:00 am and
11:00 pm, server time. Schedule your backup operation to run and
complete outside of this time. Ensure that the backup can complete
outside the window when the balancer is running and that the balancer
can effectively balance the collection among the shards in the window
allotted to each.
Restore Sharded Clusters
-
Stop all mongod
and mongos
processes.
-
If shard hostnames have changed, you must manually update the
shards
collection in the Config Database Contents
to use the new
hostnames. Do the following:
-
Start the three config servers
by
issuing commands similar to the following, using values appropriate
to your configuration:
mongod --configsvr --dbpath /data/configdb --port 27018
-
Restore the Config Database Contents
on each config server.
-
Start one mongos
instance.
-
Update the Config Database Contents
collection named shards
to reflect the
new hostnames.
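A hedged illustration of that update (the shard _id and hostname are
placeholders, not from the source):

```javascript
use config
// Point an existing shard entry at its new hostname; values are examples only.
db.shards.update(
    { _id : "shard0000" },
    { $set : { host : "new-shard0.example.net:27018" } }
)
```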
-
Restore the following:
- Data files for each server in each shard
. Because replica
sets provide each production shard, restore all the members of
the replica set or use the other standard approaches for
restoring a replica set from backup.
- Data files for each config server
,
if you have not already done so in the previous step.
-
Restart all the mongos
instances.
-
Restart all the mongod
instances.
-
Connect to a mongos
instance from a mongo
shell
and use the db.printShardingStatus()
method to ensure
that the cluster is operational, as follows:
db.printShardingStatus()
show collections
Restore a Single Shard
Always restore sharded clusters
as a whole. When you restore a single shard, keep in mind that the
balancer
process might have moved chunks
onto or
off of this shard since the last backup. If that’s the case, you must
manually move those chunks, as described in this procedure.
- Restore the shard.
- For all chunks that migrated away from this shard, you need not do
anything. You do not need to delete these documents from the shard
because the chunks are automatically filtered out from queries by
mongos
.
- For chunks that migrated to this shard since the last backup,
you must manually recover the chunks. To determine what chunks have
moved, view the changelog
collection in the Config Database Contents
.
Replica Sets
In most cases, backing up data stored in a replica set
is
similar to backing up data stored in a single instance. It’s possible to
lock a single secondary
or slave
database and then
create a backup from that instance. When you unlock the database, the secondary or
slave will catch up with the primary
or master
. You may also
choose to deploy a dedicated hidden member
for backup purposes.
If you have a sharded cluster
where each shard
is itself a replica
set, you can use this method to create a backup of the entire cluster
without disrupting the operation of the node. In these situations you
should still turn off the balancer when you create backups.
For any cluster, using a non-primary/non-master node to create backups is
particularly advantageous in that the backup operation does not
affect the performance of the primary or master. Replication
itself provides some measure of redundancy. Nevertheless, keeping
point-in-time backups of your cluster to provide for disaster recovery
and as an additional layer of protection is crucial.