Work has begun in #219 to make it possible to remove pieces from TorrentStorage once they have been read.
But there's still an issue when removing them: they keep being downloaded over and over.
I think it would be nice to add a new method on TorrentStorage, piece_has_been_downloaded(piece_id), to check whether a piece has already been downloaded in the session context, and optionally another method, reset_pieces_state(from_piece_id), that would be called when seeking within the torrent to reset the state of all pieces after this piece_id, allowing the Session to re-download them (as we obviously have to download these pieces again).
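To make the idea concrete, here's a minimal sketch of what those two methods could look like; the names and signatures are only a suggestion, nothing that exists in librqbit today:

```rust
use librqbit_core::lengths::ValidPieceIndex;

/// Hypothetical additions to TorrentStorage (suggestion only, not current API).
pub trait PieceStateTracking {
    /// Returns true if this piece was already downloaded (and validated) during
    /// this session, even if the storage has since dropped its bytes.
    fn piece_has_been_downloaded(&self, piece_id: ValidPieceIndex) -> bool;

    /// Called when a stream seeks: forget the downloaded state of every piece
    /// starting at from_piece_id, so the Session may fetch them again.
    fn reset_pieces_state(&self, from_piece_id: ValidPieceIndex) -> anyhow::Result<()>;
}
```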
What do you think about it?
Here is a custom storage implementation that removes pieces once they have been read:
```rust
use std::{collections::HashMap, path::Path};

use anyhow::Context;
use librqbit::{storage::TorrentStorage, FileInfos, ManagedTorrentShared};
use librqbit_core::lengths::{Lengths, ValidPieceIndex};
use parking_lot::RwLock;

// InMemoryPiece lives in its own module (crate::in_memory_piece) in my code;
// it is inlined here so the example is self-contained.
pub struct InMemoryPiece {
    pub content: Box<[u8]>,
    pub has_been_validated: bool,
}

impl InMemoryPiece {
    pub fn new(l: &Lengths) -> Self {
        let v = vec![0; l.default_piece_length() as usize].into_boxed_slice();
        Self {
            content: v,
            has_been_validated: false,
        }
    }

    pub fn can_be_discard(&self, upper_bound_offset: usize) -> bool {
        self.has_been_validated && upper_bound_offset >= self.content.len()
    }
}

pub struct InMemoryStorage {
    lengths: Lengths,
    file_infos: FileInfos,
    map: RwLock<HashMap<ValidPieceIndex, InMemoryPiece>>,
    max_ram_size_per_torrent: usize,
}

impl InMemoryStorage {
    pub fn new(
        lengths: Lengths,
        file_infos: FileInfos,
        max_ram_size_per_torrent: usize,
    ) -> anyhow::Result<Self> {
        // Max memory 128MiB. Make it tunable.
        let max_pieces = 128 * 1024 * 1024 / lengths.default_piece_length();
        if max_pieces == 0 {
            anyhow::bail!("pieces too large");
        }
        Ok(Self {
            lengths,
            file_infos,
            map: RwLock::new(HashMap::new()),
            max_ram_size_per_torrent,
        })
    }
}

impl TorrentStorage for InMemoryStorage {
    fn pread_exact(&self, file_id: usize, offset: u64, buf: &mut [u8]) -> anyhow::Result<()> {
        // log::debug!("pread_exact {file_id} {offset}");
        let fi = &self.file_infos[file_id];
        let abs_offset = fi.offset_in_torrent + offset;
        let piece_id: u32 = (abs_offset / self.lengths.default_piece_length() as u64).try_into()?;
        let piece_offset: usize =
            (abs_offset % self.lengths.default_piece_length() as u64).try_into()?;
        let piece_id = self.lengths.validate_piece_index(piece_id).context("bug")?;
        log::debug!("[READ] piece_id={piece_id}; piece_offset={piece_offset}");

        let mut g = self.map.write();
        // Get and remove this data from the buffer to free space.
        let inmp = g.get(&piece_id).context("piece expired")?;
        let upper_bound_offset = piece_offset + buf.len();
        buf.copy_from_slice(&inmp.content[piece_offset..upper_bound_offset]);
        if inmp.can_be_discard(upper_bound_offset) {
            log::info!("Can discard {piece_id}...");
            let _ = g.remove(&piece_id);
        }
        Ok(())
    }

    fn pwrite_all(&self, file_id: usize, offset: u64, buf: &[u8]) -> anyhow::Result<()> {
        // log::debug!("pwrite_all {file_id} {offset}");
        let fi = &self.file_infos[file_id];
        let abs_offset = fi.offset_in_torrent + offset;
        let piece_id: u32 = (abs_offset / self.lengths.default_piece_length() as u64).try_into()?;
        let piece_offset: usize =
            (abs_offset % self.lengths.default_piece_length() as u64).try_into()?;
        let piece_id = self.lengths.validate_piece_index(piece_id).context("bug")?;
        log::debug!("[WRITE] piece_id={piece_id}; piece_offset={piece_offset}");

        let mut g = self.map.write();
        let inmp = g
            .entry(piece_id)
            .or_insert_with(|| InMemoryPiece::new(&self.lengths));
        inmp.content[piece_offset..(piece_offset + buf.len())].copy_from_slice(buf);
        Ok(())
    }

    fn remove_file(&self, _file_id: usize, _filename: &Path) -> anyhow::Result<()> {
        // log::debug!("remove_file {file_id} {filename:?}");
        Ok(())
    }

    fn ensure_file_length(&self, _file_id: usize, _length: u64) -> anyhow::Result<()> {
        // log::debug!("ensure {file_id} {length}");
        Ok(())
    }

    fn take(&self) -> anyhow::Result<Box<dyn TorrentStorage>> {
        let map = {
            let mut g = self.map.write();
            let mut repl = HashMap::new();
            std::mem::swap(&mut *g, &mut repl);
            repl
        };
        Ok(Box::new(Self {
            lengths: self.lengths,
            map: RwLock::new(map),
            file_infos: self.file_infos.clone(),
            max_ram_size_per_torrent: self.max_ram_size_per_torrent,
        }))
    }

    fn init(&mut self, _meta: &ManagedTorrentShared) -> anyhow::Result<()> {
        // log::debug!("init {:?}", meta.file_infos);
        Ok(())
    }

    fn remove_directory_if_empty(&self, _path: &Path) -> anyhow::Result<()> {
        // log::debug!("remove dir {path:?}");
        Ok(())
    }

    fn on_piece_completed(&self, piece_id: ValidPieceIndex) -> anyhow::Result<()> {
        let mut g = self.map.write();
        let inmp = g.get_mut(&piece_id).context("piece does not exist")?;
        inmp.has_been_validated = true;
        Ok(())
    }
}
```
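For what it's worth, the session-side bookkeeping behind piece_has_been_downloaded / reset_pieces_state could be as simple as a set of piece ids that survives the discard done in pread_exact. This is only an illustration of what I mean, not an existing librqbit type, and it uses plain u32 piece ids to stay self-contained:

```rust
use std::collections::HashSet;

use parking_lot::RwLock;

/// Hypothetical per-session record of pieces that were already downloaded and
/// validated, kept even after their bytes were discarded from RAM.
#[derive(Default)]
pub struct DownloadedPieces {
    seen: RwLock<HashSet<u32>>,
}

impl DownloadedPieces {
    /// Would be called from on_piece_completed.
    pub fn mark_downloaded(&self, piece_id: u32) {
        self.seen.write().insert(piece_id);
    }

    /// The check the Session could do before re-requesting a piece.
    pub fn piece_has_been_downloaded(&self, piece_id: u32) -> bool {
        self.seen.read().contains(&piece_id)
    }

    /// Would be called on seek: allow re-downloading every piece >= from_piece_id.
    pub fn reset_pieces_state(&self, from_piece_id: u32) {
        self.seen.write().retain(|&p| p < from_piece_id);
    }
}
```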
I don't like that idea; it's too hacky and too specific to this use case, and it doesn't generalize.
To support streaming and deleting files you need at least:

- the stream to control when the pieces are deleted (not torrent storage), i.e. when the stream has returned a piece and no other stream still needs it, delete it (see the sketch at the end of this comment)
- the torrent storage to be able to actually delete the pieces, which implies access to live torrent state (you need to set the values in the chunk tracker). Having "on_piece_completed" isn't enough; you can't delete a piece there, as there's no guarantee a stream won't still need it
- "reserve_next_needed_piece" to do nothing (it already does this if "selected_files" is an empty list)

So I don't think torrent storage in its current form would fit that, and your changes aren't enough either; I don't think they are the right way to do it.
If you want to do that, I want to see it in one PR stack, working together with e.g. an example memory storage. It doesn't sound hard.
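To illustrate the first point, the bookkeeping I have in mind is roughly this (purely a sketch, none of these types exist in librqbit): every open stream holds a reference on the pieces it still needs, and only when the last stream releases a piece is it safe to delete it from storage and flip it back to "missing" in the chunk tracker.

```rust
use std::collections::HashMap;

use parking_lot::Mutex;

/// Sketch: a piece may only be dropped once no open stream still needs it.
#[derive(Default)]
struct PieceRefCounts {
    // piece id -> number of streams that still need it
    readers: Mutex<HashMap<u32, usize>>,
}

impl PieceRefCounts {
    /// A stream declares it will need this piece (on creation or after a seek).
    fn acquire(&self, piece_id: u32) {
        *self.readers.lock().entry(piece_id).or_insert(0) += 1;
    }

    /// A stream has read past this piece. Returns true when no stream needs it
    /// anymore, i.e. it is now safe to delete it from storage and mark it as
    /// not-downloaded in the chunk tracker.
    fn release(&self, piece_id: u32) -> bool {
        let mut readers = self.readers.lock();
        let remaining = match readers.get_mut(&piece_id) {
            Some(n) => {
                *n = n.saturating_sub(1);
                *n
            }
            None => 0,
        };
        if remaining == 0 {
            readers.remove(&piece_id);
            true
        } else {
            false
        }
    }
}
```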