Most efficient(quick) Breadth first search algorithm
Moderator: Coders of Rage
- mattheweston
- Chaos Rift Junior
- Posts: 200
- Joined: Mon Feb 22, 2010 12:32 am
- Current Project: Breakout clone, Unnamed 2D RPG
- Favorite Gaming Platforms: PC, XBOX360
- Programming Language of Choice: C#
- Location: San Antonio, Texas
- Contact:
I'm looking for an efficient breadth-first search that is very quick. I have a project where I need to traverse a database, hitting every node while maintaining the parent-child relationships, so I can insert a record for every row in every table in the database. Basically I'm copying a SQL Server database in code, but I'm removing data and skipping a few tables. I also have to do it this way because I have to use an API and create new unique identifiers for each row in the process.
Any thoughts on how to make this as quick and efficient as possible?
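To give a better idea of what I mean, here's a rough sketch of the traversal I have in mind. TableNode, Rows, Children, and CopyRow are just placeholders for whatever the vendor API actually exposes, not real calls:
Code:
using System.Collections.Generic;

// Everything below is a placeholder sketch -- TableNode, Rows, Children and
// CopyRow stand in for whatever the vendor API actually exposes.
class TableNode
{
    public string Name;
    public List<object[]> Rows = new List<object[]>();
    public List<TableNode> Children = new List<TableNode>();
}

class DatabaseCopier
{
    public void CopyDatabase(TableNode root)
    {
        var queue = new Queue<TableNode>();
        var visited = new HashSet<string>();

        queue.Enqueue(root);
        visited.Add(root.Name);

        while (queue.Count > 0)
        {
            TableNode current = queue.Dequeue();

            // Parents come off the queue before their children, so a child row
            // is only copied after the row it references already exists.
            foreach (object[] row in current.Rows)
                CopyRow(current, row);

            foreach (TableNode child in current.Children)
            {
                if (visited.Add(child.Name))  // Add returns false if already seen
                    queue.Enqueue(child);
            }
        }
    }

    void CopyRow(TableNode table, object[] row)
    {
        // This is where the vendor API call to insert the row would go.
    }
}
The HashSet is just there so a table doesn't get enqueued twice if two parents reference it.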
- dandymcgee
- ES Beta Backer
- Posts: 4709
- Joined: Tue Apr 29, 2008 3:24 pm
- Current Project: https://github.com/dbechrd/RicoTech
- Favorite Gaming Platforms: NES, Sega Genesis, PS2, PC
- Programming Language of Choice: C
- Location: San Francisco
- Contact:
Re: Most efficient(quick) Breadth first search algorithm
I'm wondering if you could do this faster than exporting, editing the resulting file(s), and importing?
Falco Girgis wrote: It is imperative that I can broadcast my narcissistic commit strings to the Twitter! Tweet Tweet, bitches!
- mattheweston
Re: Most efficient(quick) Breadth first search algorithm
That would be nice, but we are forced to use the vendor's API so that we don't void any support agreements.
- mattheweston
Re: Most efficient(quick) Breadth first search algorithm
There is an option to use XML. I'm wondering if it wouldn't be better to parse the XML.
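Something along these lines is what I'm picturing -- the element and attribute names are made up, since I haven't seen the actual export format yet:
Code:
using System;
using System.Xml.Linq;

class XmlWellReader
{
    static void Main()
    {
        // "Table", "Row" and the "name" attribute are guesses at the export
        // format -- the real schema could look completely different.
        XDocument doc = XDocument.Load("wellfile_export.xml");

        foreach (XElement table in doc.Root.Elements("Table"))
        {
            string tableName = (string)table.Attribute("name");

            foreach (XElement row in table.Elements("Row"))
            {
                // Each column value would get pushed through the vendor API
                // instead of just being printed.
                foreach (XElement column in row.Elements())
                    Console.WriteLine(tableName + "." + column.Name + " = " + column.Value);
            }
        }
    }
}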
- dandymcgee
Re: Most efficient(quick) Breadth first search algorithm
mattheweston wrote: insert a record for every row in every table in the database
I'm still having trouble trying to imagine what possible scenario would require such a strange use-case.
Falco Girgis wrote: It is imperative that I can broadcast my narcissistic commit strings to the Twitter! Tweet Tweet, bitches!
- mattheweston
Re: Most efficient(quick) Breadth first search algorithm
It's something for work. We have a list of Well Files that have to be copied to another database with certain data removed, since this is transferring data from one company to another. That wouldn't be too big a deal; however, we have to use the API for the application that the databases support, or we run into issues with those companies not being able to receive support from the vendor.
The API is a major hassle. We don't have a simple "copy well" function, so to copy a well we have to do a breadth-first search to maintain parent-child relationships, because the application is predicated on those relationships. We have to traverse the whole tree to hit every table in order to copy every row of every table to the other database.
Oh, and since creating a new well file in the second database generates completely different identifiers, we have to keep a lookup table mapping all rows from one database to the other.
When we get to copying updated data in these wells I won't have to hit every table, but on the first run... it's going to be a nightmare performance-wise.
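The lookup table part is basically just a dictionary keyed on the old identifier. Something like this, where SourceRow and CreateRowViaApi are stand-ins for the real schema and the real vendor call:
Code:
using System;
using System.Collections.Generic;

// SourceRow and CreateRowViaApi are stand-ins; the real vendor API and row
// shape will obviously differ. The rows are assumed to arrive in BFS order,
// i.e. parents before children.
class SourceRow
{
    public Guid SourceId;
    public Guid? ParentId;
    public string TableName;
    public Dictionary<string, object> Values;
}

class IdMapper
{
    readonly Dictionary<Guid, Guid> idMap = new Dictionary<Guid, Guid>();

    public void CopyRows(IEnumerable<SourceRow> rowsInBfsOrder)
    {
        foreach (SourceRow row in rowsInBfsOrder)
        {
            // Re-point the child at its parent's *new* identifier before inserting.
            Guid? newParentId = row.ParentId.HasValue ? idMap[row.ParentId.Value] : (Guid?)null;

            // The vendor API generates a brand-new identifier in the target database.
            Guid newId = CreateRowViaApi(row.TableName, row.Values, newParentId);

            // Remember the mapping so this row's own children can find it later.
            idMap[row.SourceId] = newId;
        }
    }

    Guid CreateRowViaApi(string tableName, Dictionary<string, object> values, Guid? parentId)
    {
        // Placeholder for the real vendor API call.
        return Guid.NewGuid();
    }
}
As long as the BFS hands rows over parent-first, the parent's new identifier is always in the dictionary by the time one of its children needs it.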
- dandymcgee
Re: Most efficient(quick) Breadth first search algorithm
mattheweston wrote: It's something for work. We have a list of Well Files that have to be copied to another database with certain data removed, since this is transferring data from one company to another. That wouldn't be too big a deal; however, we have to use the API for the application that the databases support, or we run into issues with those companies not being able to receive support from the vendor.
The API is a major hassle. We don't have a simple "copy well" function, so to copy a well we have to do a breadth-first search to maintain parent-child relationships, because the application is predicated on those relationships. We have to traverse the whole tree to hit every table in order to copy every row of every table to the other database.
Oh, and since creating a new well file in the second database generates completely different identifiers, we have to keep a lookup table mapping all rows from one database to the other.
When we get to copying updated data in these wells I won't have to hit every table, but on the first run... it's going to be a nightmare performance-wise.
Sounds like quite a hack, but I fully understand why you have to do it this way now. Gotta love those support terms, eh? It sounds like your approach is what needs to be done, but as far as your original question about specific breadth-first algorithms goes, I can't tell you any more than you'd learn from a simple Google search. If you're only ever using it once, the first time, I wouldn't worry as much about the efficiency of the code as the efficiency of the time it takes you to write it.
Falco Girgis wrote: It is imperative that I can broadcast my narcissistic commit strings to the Twitter! Tweet Tweet, bitches!
- MarauderIIC
- Respected Programmer
- Posts: 3406
- Joined: Sat Jul 10, 2004 3:05 pm
- Location: Maryland, USA
Re: Most efficient(quick) Breadth first search algorithm
dandymcgee wrote: If you're only ever using it once ... I wouldn't worry as much about the efficiency of the code as the efficiency of the time it takes you to write it.
This.
BFS is pretty standard anyways.
If you want to parse XML, that's an option, but you definitely want to use an existing library so that you don't mess it up.
I'm not sure what your dataset looks like... is parallelization an option? Probably not, because there's no processing to be done, really...
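If it ever did become worth it -- i.e. the wells really are independent of each other and the vendor API doesn't choke on concurrent sessions -- the per-well loop is where I'd split the work. wellFiles and CopyWellFile below are just stand-ins for your own code:
Code:
using System.Collections.Generic;
using System.Threading.Tasks;

class ParallelCopy
{
    // CopyWellFile would be the BFS-based copy for a single well described above.
    static void CopyAll(IEnumerable<string> wellFiles)
    {
        Parallel.ForEach(wellFiles, wellFile =>
        {
            CopyWellFile(wellFile);
        });
    }

    static void CopyWellFile(string wellFile)
    {
        // Placeholder for the per-well BFS copy.
    }
}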
I realized the moment I fell into the fissure that the book would not be destroyed as I had planned.