Saturday, February 25, 2012

Managing Database Schema Changes

I hope this is the correct newsgroup to post this to; if not, could someone
please advise me as to a better choice. We are currently releasing V1 of
our product, which will include a SQL Server DB. After this point, when we
want to provide Service Packs/Rollups/Upgrades/whatever to our clients, we
are going to need to manage database changes in some fashion to ensure we
keep the database in sync with the app. I have seen people use SourceSafe
for this, just keeping a script for each change and labeling the scripts so
they get pulled out during the build process. This solution seemed to work
well enough, but I was wondering if anyone in this group had any other
suggestions. As always, any advice that can be provided would be greatly
appreciated.
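Just to illustrate what I mean, one of those labeled change scripts might look
roughly like this (the file name and table/column names are made up):

-- 0042_add_customer_region.sql  (the label/number controls the build order)
IF NOT EXISTS (SELECT 1 FROM INFORMATION_SCHEMA.COLUMNS
               WHERE TABLE_NAME = 'Customer' AND COLUMN_NAME = 'Region')
    ALTER TABLE dbo.Customer ADD Region varchar(50) NULL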
Thank you,
Prescott
"J Prescott Sanders" <nomorespam@.spam.com> schrieb im Newsbeitrag
news:uIemf2GVEHA.3656@.TK2MSFTNGP11.phx.gbl...
This is a difficult topic, especially if you have to create or change
indexes on an existing database with a significant amount of data. That can
easily take a very long time, effectively stopping your application.
Since schema changes (DDL) do not happen within a transaction, it can be a
problem to deal with failure of schema-changing statements. This is
especially true if you not only want to change the schema but also modify
data as part of a schema migration from one version to the next.
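At least for the data part you can guard yourself a little by wrapping it in
an explicit transaction and rolling back on error, roughly like this (just a
sketch; the table and column names are invented):

-- data portion of a migration step, kept atomic so a failure leaves nothing behind
BEGIN TRANSACTION
UPDATE dbo.Customer SET Region = 'UNKNOWN' WHERE Region IS NULL
IF @@ERROR <> 0
BEGIN
    ROLLBACK TRANSACTION
    RAISERROR ('Data migration step failed - no changes kept.', 16, 1)
END
ELSE
    COMMIT TRANSACTION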
IMHO source code control alone is not sufficient, since you'll have to take
into consideration that customers might be upgrading from different schema
versions to the current one. They might have skipped a product version you
released, and the next release then has to take care of both schema
migration steps.
The ideal solution would be a target schema description and some kind of
tool that compares the actual schema with this target schema and performs
all the changes needed for them to be in sync again. Unfortunately I don't
know of such a tool, but if anybody does, I'd be glad to hear about it.
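Short of a full-blown tool, you can at least approximate such a comparison by
checking the catalog views against your target description, e.g. (rough idea
only, names invented):

IF NOT EXISTS (SELECT 1 FROM INFORMATION_SCHEMA.COLUMNS
               WHERE TABLE_NAME = 'Customer' AND COLUMN_NAME = 'Region')
    PRINT 'Schema drift: column Customer.Region is missing'
IF NOT EXISTS (SELECT 1 FROM INFORMATION_SCHEMA.TABLES
               WHERE TABLE_NAME = 'OrderArchive')
    PRINT 'Schema drift: table OrderArchive is missing'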
The single golden rule I can give you: plan schema migration carefully and
try to imagine which kinds of schema changes you might be doing in the
future. Since you're at the beginning of your migration history, you have
a good chance to get it right. Otherwise the whole topic can easily get
out of hand. That of course depends on the nature of your application and
the expected schema changes.
Good luck!
Kind regards
robert
|||Hi Robert,
Try dbMaestro. It's a product that allows comparison, migration and archiving of database schema and data.
This tool can compare the actual schema with a target schema and perform all the changes needed for them to be in sync again.
You can find it here:
http://www.extreme.co.il
"Robert Klemme" wrote:

> "J Prescott Sanders" <nomorespam@.spam.com> schrieb im Newsbeitrag
> news:uIemf2GVEHA.3656@.TK2MSFTNGP11.phx.gbl...
> someone
> of
> we
> we
> we
> safe
> they
> well
> This is a difficult topic: especially if you have to create or change
> indexes on an existing database with significant amount of data. This can
> easily take very long effectively stopping your application.
> Since schema changes (DDL) do not happen within transaction it can be a
> problem to deal with failure of schema changing statements. This is
> especially true if you not only want to change the schema but also modify
> data as part of a schema migration from one version to the next.
> IMHO source code control is not sufficient since you'll have to take into
> consideration that customers might be upgrading from different schema
> versions to the current one. They might have skipped a product version
> you released and the next release has to take care of both schema
> migration steps.
> The ideal solution would be a target schema description and some kind of
> tool that compares the actual schema with this target schema and performs
> all changes needed for them to be in synch again. Unfortunately I don't
> know such a tool but if anybody does know such a tool I'd be glad to get
> it to know.
> The single golden rule I can give you: plan schema migration carefully and
> try to imagine which kind of schema changes you might be doing in the
> future. Since you're at the beginning of your migration history, you have
> a good change to get it right. Otherwise the whole topic might easily get
> out of hand. That of course depends on the nature of your application and
> the expected schema changes.
> Good luck!
> Kind regards
> robert
>
|||Robert,
Thanks for the quick response. It seems you have quite a bit of experience
on this subject. Can you comment at all on the processes you have in place
to manage database changes within your own system? Any information will
certainly be appreciated as I work on a solution for our system.
Thanks in advance.
Prescott
"Robert Klemme" <bob.news@.gmx.net> wrote in message
news:2jdkn2F10ja1kU1@.uni-berlin.de...
> "J Prescott Sanders" <nomorespam@.spam.com> schrieb im Newsbeitrag
> news:uIemf2GVEHA.3656@.TK2MSFTNGP11.phx.gbl...
> someone
> of
> we
> we
> we
> safe
> they
> well
> This is a difficult topic: especially if you have to create or change
> indexes on an existing database with significant amount of data. This can
> easily take very long effectively stopping your application.
> Since schema changes (DDL) do not happen within transaction it can be a
> problem to deal with failure of schema changing statements. This is
> especially true if you not only want to change the schema but also modify
> data as part of a schema migration from one version to the next.
> IMHO source code control is not sufficient since you'll have to take into
> consideration that customers might be upgrading from different schema
> versions to the current one. They might have skipped a product version
> you released and the next release has to take care of both schema
> migration steps.
> The ideal solution would be a target schema description and some kind of
> tool that compares the actual schema with this target schema and performs
> all changes needed for them to be in synch again. Unfortunately I don't
> know such a tool but if anybody does know such a tool I'd be glad to get
> it to know.
> The single golden rule I can give you: plan schema migration carefully and
> try to imagine which kind of schema changes you might be doing in the
> future. Since you're at the beginning of your migration history, you have
> a good change to get it right. Otherwise the whole topic might easily get
> out of hand. That of course depends on the nature of your application and
> the expected schema changes.
> Good luck!
> Kind regards
> robert
>
|||"J Prescott Sanders" <nomorespam@.spam.com> schrieb im Newsbeitrag
news:OmmcrXHVEHA.2692@.TK2MSFTNGP09.phx.gbl...
> Robert,
> Thanks for the quick response. It seems you have quite a bit of experience
> on this subject. Can you comment at all on the processes you have in place
> to manage database changes within your own system? Any information will
> certainly be appreciated as I work on a solution for our system.
Well, what we do is basically store a numeric schema version number in
some table and check on startup whether the schema version the software
expects is more recent than the schema version found. If it is, we run all
the individual upgrade steps that we defined for each version hop.
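In T-SQL the core of it looks roughly like this (a sketch only; the table and
column names are placeholders, not our actual schema):

-- one-row table holding the current schema version
CREATE TABLE dbo.SchemaVersion (Version int NOT NULL)
INSERT INTO dbo.SchemaVersion (Version) VALUES (1)

-- upgrade step 1 -> 2, executed only if the database is still at version 1
IF (SELECT Version FROM dbo.SchemaVersion) = 1
BEGIN
    ALTER TABLE dbo.Customer ADD Region varchar(50) NULL
    UPDATE dbo.SchemaVersion SET Version = 2
END
-- step 2 -> 3 follows the same pattern; a customer who skipped a release
-- simply runs through all the steps in order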
Since a lot of legacy code is involved, things are a bit difficult
sometimes. I'd say a crucial thing is to have several phases for the
schema upgrade: a read-only phase that determines whether the preconditions
are satisfied (i.e. whether the schema at hand is in fact the one it
claims to be). Only if that verification passes successfully is the next
phase started, which modifies the database. Alternatively, you can take a
full backup beforehand and restore it if the modifications fail.
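The read-only phase is essentially a bunch of existence checks, something
along these lines (again only a sketch with invented names):

DECLARE @ok bit
SET @ok = 1
IF NOT EXISTS (SELECT 1 FROM INFORMATION_SCHEMA.TABLES
               WHERE TABLE_NAME = 'SchemaVersion')
    SET @ok = 0
IF NOT EXISTS (SELECT 1 FROM INFORMATION_SCHEMA.COLUMNS
               WHERE TABLE_NAME = 'Customer' AND COLUMN_NAME = 'CustomerId')
    SET @ok = 0
IF @ok = 0
    RAISERROR ('Schema preconditions not met - upgrade aborted before any change.', 16, 1)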
The other factor that makes things hard for us is that we don't support a
single database.
And there's another gotcha: if you provide bug-fix patches for older
versions of the software that include schema changes, then tracking schema
versions and changes becomes really hard, because you need to incorporate
the change in the newest head version as well, but you have to make sure
it's not applied twice. Example:
1.0 version of the software is released
2.0 version of the software is released
3.0 version of the software is released
1.1 version of the software is released that contains a schema bug fix
Now you'll have to make sure that 3.0 checks on installation whether the
1.1 schema fix has already been incorporated into the DB or not. You get the picture.
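One way to keep the head version from applying the 1.1 fix twice is to write
the fix so that it checks for its own effect before doing anything (sketch,
invented names):

-- idempotent hotfix: only add the column if it is not already there
IF NOT EXISTS (SELECT 1 FROM INFORMATION_SCHEMA.COLUMNS
               WHERE TABLE_NAME = 'Invoice' AND COLUMN_NAME = 'TaxRate')
    ALTER TABLE dbo.Invoice ADD TaxRate decimal(5,2) NULL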
Kind regards
robert