supported splitting packages #19
base: master
Conversation
…repare to adjust the code structure of the writing part in the future
… files under special conditions
This needs tests (ideally without writing gigabytes to disk though)
Excuse me, may I ask what I need to do?
@@ -211,22 +234,47 @@ public void Write(Stream stream)

    writer.Write(NullByte);

    var fileTreeSize = stream.Position - headerSize;
    // clear sub file
    for (ushort i = 0; i < 999; i++)
Remove this loop
This deletes the split archive files produced by previous writes. I believe that when users reduce the maximum byte count and regenerate the split files, leftover files from the previous run could be very confusing for them.
That's up to them to clean up then; it's not really our job to arbitrarily loop over 1k files. We only care that the _dir.vpk references the correct chunk files, which will be overwritten.
That's up to them to clean up then; it's not really our job to arbitrarily loop over 1k files. We only care that the _dir.vpk references the correct chunk files, which will be overwritten.
You're right, we shouldn't make decisions on the user's behalf.
Okay, let me try something. I haven't written anything similar before.
namespace SteamDatabase.ValvePak
{
    internal sealed class WriteEntry(ushort archiveIndex, uint fileOffset, PackageEntry entry)
I don't think this is needed. You can calculate the ArchiveIndex directly in AddFile.
You can look at Valve's packedstore.cpp to see how they handle adding files:
- CPackedStore::AddFile has a bMultiChunk bool.
- They keep track of m_nHighestChunkFileIndex and then increase it if the file offset is higher than m_nWriteChunkSize which defaults to 200 * 1024 * 1024 bytes.
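The chunking rule described above can be sketched language-agnostically. This is only an illustration of the logic, not ValvePak's or Valve's actual code; the function name is made up, and only the 200 * 1024 * 1024 default mirrors Valve's m_nWriteChunkSize.

```python
# Default write chunk size used by Valve's CPackedStore (200 MiB).
WRITE_CHUNK_SIZE = 200 * 1024 * 1024


def next_archive_index(highest_index: int, chunk_offset: int,
                       chunk_size: int = WRITE_CHUNK_SIZE) -> tuple[int, int]:
    """Return (archive_index, offset) for the next file to be written.

    If the running offset in the current chunk has reached the chunk
    size, advance to a new chunk file and reset the offset to zero,
    analogous to bumping m_nHighestChunkFileIndex.
    """
    if chunk_offset >= chunk_size:
        return highest_index + 1, 0  # start a fresh chunk file
    return highest_index, chunk_offset
```

With this approach the archive index falls out of the running offset during AddFile, so no separate WriteEntry record is needed.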
That sounds good. I should go take a look at packedstore.cpp. Can you tell me where it is?
Search for cstrike15_src
I found it, thank you
const byte NullByte = 0;

// File tree data
bool isSingleFile = entries.Sum(s => s.TotalLength) + headerSize + 64 <= maxFileBytes;
I don't like using maxFileBytes here, we should just have a bool to specify that we want to multi chunk.
This size calculation is also gonna be incorrect if we want to write file hashes.
We currently have this, but this ideally should be calculated for the chunks:
Ref in Valve's code: HashAllChunkFiles
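To illustrate the reviewer's point about the size estimate, here is a sketch of a single-file check that also budgets for the hash sections. All names, and the idea of passing the hash-section size in as a parameter, are placeholders for illustration, not ValvePak's actual API.

```python
def is_single_file(total_entry_bytes: int, header_size: int,
                   tree_size: int, hash_section_size: int,
                   max_file_bytes: int) -> bool:
    """Decide whether everything fits in one file.

    Unlike a check based only on entry data plus a fixed fudge factor,
    this includes the file tree and any hash sections that will be
    written, so enabling hashes cannot silently break the estimate.
    """
    total = total_entry_bytes + header_size + tree_size + hash_section_size
    return total <= max_file_bytes
```

A bool flag for "multi-chunk" (as suggested above) would then gate whether this check is consulted at all, rather than inferring intent from maxFileBytes.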
Actually, I'm not quite sure how to calculate the hash value here. I think I should first take a look at cstrike15_src.
Write now supports splitting packages. You can use it like this:
package.Write("writePath", maxPackageBytes);