From 3b40d67dbb9d2914f0c17626586a488560aea65f Mon Sep 17 00:00:00 2001 From: GitHub Action Date: Tue, 1 Apr 2025 13:14:47 +0000 Subject: [PATCH] Update bundles --- bundles/dev_logs.json | 29 ++++++++++++++++++++++------- bundles/version.json | 2 +- bundles/wiki.json | 18 +++++++++--------- 3 files changed, 32 insertions(+), 17 deletions(-) diff --git a/bundles/dev_logs.json b/bundles/dev_logs.json index abd6733..a6db46d 100644 --- a/bundles/dev_logs.json +++ b/bundles/dev_logs.json @@ -2,7 +2,7 @@ { "_id": "Architecture Overhaul", "content": "Hey @everyone, here's a small update on what I've been working on lately:\n\nAs the project has grown bigger, it's gotten quite difficult to keep track of and manage a billion different custom formats, quality profiles, etc. To help improve development productivity, I've planned a complete overhaul of Dictionarry's architecture. This starts with separating things into modules - namely a separate database which powers the website and the profilarr tool.\n\nNext up is standardizing the actual entries inside the database. The biggest issue in development right now is making / editing / updating the same thing multiple times. If you have the same regex pattern for multiple CFs, it needs to be updated for each one of them. Quality profiles across different apps have minuscule differences in syntax (eg. web-dl in radarr vs web in sonarr), which means we need multiple files with tiny differences.\n\nWorking in this system is extremely error-prone and time-consuming. To fix this, I'm creating a standard unique to Dictionarry based on a **single definition format**, i.e. Regex patterns, Custom Formats and Quality Profiles are defined once, and repeated in other places using foreign keys. I don't know exactly _how_ this will look, but the plan is simplicity above all. 
Outside of improving productivity, I hope this standard helps encourage people who feel less confident with custom formats / quality profiles to make more intuitive changes to their own setups.\n\nNow, the problem with this new and improved standard is - the arrs won't be able to read the files anymore. Solution: A compiler! This is where the fun begins; we take our simple, easy-to-develop-for files and push them through the compiler. Out pops the required syntax, with those weird naming rules (web-dl for radarr, web for sonarr), without the developer needing to ever worry about it!\n\nHere's a canvas page I made in Obsidian which visualizes this architecture:\n\n![Architecture Diagram](https://i.imgur.com/HcXFNHU.png)\n\n# Profile Selector\n\nHere's an updated look at the new profile selector (WIP) in action. I'll leave explaining the selection algorithm for another day (because I'm still not quite happy with it), but I think it's still pretty cool to look at as is.\n\n![Selection Algorithm v1](https://streamable.com/bhi7h6)", - "last_modified": "2025-03-26T11:42:47.239284+00:00", + "last_modified": "2025-04-01T13:14:44.324274+00:00", "title": "Architecture Overhaul", "slug": "architecture_overhaul", "author": "santiagosayshey", @@ -15,7 +15,7 @@ { "_id": "Modular Choices", "content": "Hey @everyone, here's a small (but very important) post on the new update system!\n\n## Current Profilarr\n\nCurrently, there is 0 support for updates in Profilarr. This is obviously not ideal; it's a nightmare to keep up to date with changes and almost certainly breaks any custom changes you make.\n\n## Profilarr v1\n\nUsers will be able to view incoming and outgoing changes, as well as resolve any conflicts between the two. To achieve this, a user-friendly GUI has been built on top of Git's merge functionality and allows fine control over what should be merged / ignored. 
More specifically, this functionality allows us to make custom changes and choose to retain them once a new update comes around.\n\n- As an example, let's say you've made the Dolby Vision custom formats negative because your TV doesn't support it. A new update has come out which shuffles around HDR scores, and this leads to a merge conflict between the two custom format scores.\n- In the settings page, you can choose to accept the incoming change or retain your local changes. Profilarr will 'remember' your choice and stop prompting you to update this custom format until a new update comes out, in which case, the situation repeats. Keep local or accept incoming.\n\n### Settings Page\n\nProfilarr now includes a dedicated page for 'Sync Settings'. It allows you to link / unlink a database repository, view and change branches, as well as deal with incoming / outgoing changes and their conflicts. This page has been planned for developers too; you can add an authenticated GitHub dev token to your environment and you have the ability to make changes directly to Profilarr's database (not to stable, obviously).\n\n# Beta Release\n\n- Still not quite ready yet, but I'm working hard to get it out! Stay tuned :hearts:\n\nHere's a screenshot of this new Conflict Resolver in action (ignore the Date Modified row; it will be removed for actual use)\n\n![Conflict Resolver](https://i.imgur.com/0EZrumU.png)", - "last_modified": "2025-03-26T11:42:47.239284+00:00", + "last_modified": "2025-04-01T13:14:44.324274+00:00", "title": "Modular Choices", "slug": "modular_choices", "author": "santiagosayshey", @@ -26,10 +26,25 @@ "user_choice" ] }, + { + "_id": "Profilarr is in Beta \ud83d\ude80", + "content": "hey @everyone, long-awaited dev log :)\n\n## What's New? 
\ud83d\udc48\n\nMany people are already aware, but I thought I should formally announce here on discord that **Profilarr is out in beta!** I've been working on it since around July last year and put in a massive effort over the Christmas break to get it working. Even though it's not nearly as stable as I would like it to be, it implements the core architecture I first talked about [here](https://dictionarry.dev/devlog/architecture_overhaul). There is still so (x10) much to be done in terms of bugs & polish & new features, but I'm happy sharing it as is. Hopefully you can all find some benefit in using it too :) \n\nYou can read our setup guide [here](https://dictionarry.dev/wiki/profilarr-setup). It's available as a community app on Unraid, and as a Docker image for both ARM (Apple Silicon, Raspberry Pi) and x86.\n### Database \ud83d\udcbe\n\nAlong with Profilarr, the Dictionarry database has also got an overhaul. We introduced the new encode efficiency index, 2160p Quality and Balanced profiles as well as other small improvements like editions, repacks and freeleech. Here are some scattered thoughts that you might also be interested in: \n- @Seraphys has been working on a scoring refactor that introduces 720p fallback, fixes streaming service names, and groups similar releases together better. It's a huge change that I haven't been able to fully test myself, but I've merged it into a separate branch because I know people are pretty antsy to start testing themselves. Anyone is free to give it a try, you just have to switch to the `scoring-refactor` branch in Profilarr. Please direct any issues / improvements to the database's [Issue Tracker](https://github.com/Dictionarry-Hub/database).\n- I'm personally not too happy with the state of the current database - poorly named files and renames/imports weren't taken into enough consideration and it's causing way too many download loops. 
I'm still trying to figure out exactly how I want to tackle these problems but I just want people to know that it is on my mind and it will be improved in future. \n\n### Tweaks \ud83d\udd27\n\nI talked about tweaks in detail [here](https://dictionarry.dev/devlog/profile_tweaks) and had actually implemented some of them into Profilarr, but decided to remove them at the last minute. On paper, it's an interesting system. In practice, it's confusing and really hard to program for. It's meant to be a database-agnostic feature, but was hardcoded into Profilarr's profile system. I'm going to keep this feature on the roadmap as a maybe for now, but I'm going to have to completely rethink how to implement it from the ground up. \n\n## What's Next? \ud83d\udc49\n\nHere's a (non-comprehensive) list of what you can expect me to work on now that Profilarr is in beta. \n\n### Profilarr\n\n- Media Management Sync - Databases will be able to implement their own media management settings (quality sliders, rename templates, delay profiles, etc) and use profilarr to sync them\n- Multi Database Support - Refactoring the database to use a dependency system that allows databases to act as layers and depend on layers above them. This lets profile databases exist independently of format databases, and format databases independently of regex databases. This way, you'll be able to connect to multiple at once and build off them as you please (or just link a complete one). \n- Everything on the issue tracker: https://github.com/Dictionarry-Hub/profilarr/issues\n\n### Database\n\n- Efficiency Profiles - 1080p Efficient (10%), 1080p Efficient (22.5%) and 2160p Efficient will use the [Encode Efficiency Index](https://dictionarry.dev/wiki/EEi) to prioritise HEVC releases. \n- Anime Support - Likely just quality profiles, but I also want to explore alternative options that better support dynamic needs. 
We likely want to make release group tiers, but also figure out a way to prioritise releases from newer & better sources. I'm not personally into that much anime, so I'm going to need as much input as I can get from you guys ~ please start those conversations if you want something to be considered (some have already asked, I'll get back to you when I can!)\n- Better Streaming Service Grab Logic - This is already partially improved in Seraphys' refactor, but I would also like to add support for more streaming services and revise the interaction between release groups and sources. \n\n## Housekeeping \ud83e\uddf9\n\nWe've had an influx of new members over the past couple weeks, so I'd like to welcome you all to our discord \ud83d\udc4b Come say hey in #general if you haven't already. \n\n### Moderation, Wiki, Support \ud83e\udd1d\n\n- I'd like to introduce @Seraphys as our first moderator and designated detail devotee \ud83e\udd23 Big claps all around. \n- The rules, faq, links (among others) are very out of date and will be getting a refresh soon, stay tuned for those updates. \n- I will likely be closing the support post channels soon and replacing them with a single, simpler text channel and removing the bot integration. For any basic support, please message us over there, but for any major issues please redirect your queries to our issue trackers on GitHub from now on. [here](https://github.com/Dictionarry-Hub/profilarr/issues) and [here](https://github.com/Dictionarry-Hub/database)\n\n### Donations \ud83d\udcb8\n\nIf you've donated and would like a special 'Donor' role badge here on discord, please shoot me a PM. \n\n### Taking a Break \u23f8\ufe0f\n\nI want to let everyone know that I'll be taking a break for a little while ~ I spent the majority of the past 4-5 months working on Profilarr and I'm quite burnt out. I'm trying very hard to balance full time study with development, but they unfortunately just don't mesh the way I hoped they would. 
I can't not work at 100% for either, so something had to give and for the past month or so, that's been my sleep and sanity. I unfortunately can't delay my semester (as much as I want to), so I'm going to have to dial down the time I spend on Dictionarry/Profilarr. I think I'm going to do a proper break (no dev at all) for a couple weeks at least ~ until my Easter break, then I'll slowly pick up speed again. A couple of specific points I want to mention here:\n- I'm going to stop giving ETAs for things. They always take longer than I expect them to, which puts pressure on me and probably disappoints you guys when something inevitably doesn't happen on time. The de facto answer to any ETA questions from now on will be \"when it's ready\". \n- I've been pretty scatterbrained lately, so if someone is waiting on a message from me just know that I haven't forgotten about you and will get back when I have the time. If it's been a while, shoot me a PM or something as a reminder ~ I'll still be active on discord during my break. \n\n### Thank You \ud83d\ude4f\n\nThis project has grown tremendously in scope in the last year and that's not possible without a community, so big thanks from me to all of you. I'm still figuring all of this out as I go along so it's kind of unbelievable how many people are using a tool that once only existed in my head. 
\n\nCheers, everyone.", + "last_modified": "2025-04-01T13:14:44.324274+00:00", + "title": "Profilarr is in Beta \ud83d\ude80", + "slug": "profilarr_is_in_beta", + "author": "santiagosayshey", + "created": "2024-1-4", + "tags": [ + "devlog", + "profilarr", + "database", + "housekeeping" + ] + }, { "_id": "Profile Selector v3", "content": "hey @everyone, thought I'd make a channel to share some development logs.\n\nI've been feeling pretty inspired code-wise the past few days, so I've actually made some progress despite saying I would take a break...\n\nAnyways, after designing Profile Selector v3 in Figma for the past couple months, I started work on actually implementing it. Let me tell you that drawing shapes is much, much easier than coding them. After a couple days of regretting not paying attention in high school trigonometry, I have the basic functionality in place! We have three data points which represent each of the requirements - quality, efficiency, compatibility. The user can select points on each of the axes, and each combination is used to recommend a profile. It's not hooked up to the database yet, so random strings are being used as a placeholder.\n\nThe good thing about this design is that it's really modular. Once I finish the 'beginner' version of it, I'll be able to add an advanced mode which can be used to select any kind of requirement. Resolution, HDR, Audio, etc.\n\nHere's how it looks right now (obvious disclaimer that the final version will look much, much better):\n\n![Selector Proof of Concept](https://streamable.com/2uprnl)\n\nHere's a funny tidbit from development:\n\nI tried writing some animation styling to make the inner polygon look like it's stretching (as opposed to instant, static movement). 
It didn't quite work...\n\nBehold: Frankenstein's Triangle.\n\n![Frankenstein's Triangle](https://streamable.com/z70sj8)", - "last_modified": "2025-03-26T11:42:47.239284+00:00", + "last_modified": "2025-04-01T13:14:44.324274+00:00", "title": "Profile Selector v3", "slug": "profile_selector_v3", "author": "santiagosayshey", @@ -43,7 +58,7 @@ { "_id": "Profile Tweaks", "content": "Hey @everyone, I've been hard at work on the next Profilarr version over the past few weeks and have new stuff to show off!\n\nThe profiles we make are meant to be (really good) starting points, not a strict standard on what you _should_ be grabbing. Up until now, profiles existed as singular entities that don't respect custom changes. Merge conflict resolution was a big step in the right direction for this (read more in the last dev log), but it's a bit more hands-on, and not something I expect most people to engage with.\n\nEnter 'Profile Tweaks'. These are simple check boxes you can enable / disable and are unique to YOUR profiles. They will ALWAYS be respected, regardless of what updates we make to the base profile. For now, these tweaks include:\n\n- Prefer Freeleech\n- Allow Prereleases (CAMS, Screeners, etc)\n- Language Strictness\n- Allow Lossless audio\n- Allow Dolby Vision without Fallback\n- Allow bleeding-edge codecs (AV1, H.266)\n\n(Some are only available for specific profiles, eg lossless audio for 1080p Encode profiles).\n\n## Profilarr Progress\n\n- Progress is steady, I've been working on it every day since my semester ended. It's taken way, way longer than I've expected (sorry!) 
but I'm happy with how it's starting to look.\n- Git integration is complete and working, but needs lots of testing.\n- Data modules (custom formats, regex patterns, quality profiles) are complete and fully implement the existing logic from Radarr / Sonarr.\n- I am currently in the process of porting existing data to the new database (https://github.com/Dictionarry-Hub/database/tree/stable) in the new profilarr standard format. This is going to take a while, as I have to write descriptions, add tags, test cases, etc.\n- Finally, I am starting to work on the compilation engine (https://discord.com/channels/1202375791556431892/1246504849265266738/1272756617041154049) and the import module. Once these things are complete, and I'm confident we won't run into massive bugs, I'll release a beta Docker image. ETA? I really don't know, but I'm working as hard as I can.\n\nIf anyone has any tweak ideas (even super specific ones), please let me know and I'll work on getting it integrated! Here's an image of the Tweaks Tab:\n\n![Profile Tweaks](https://i.imgur.com/fzbmJSn.png)", - "last_modified": "2025-03-26T11:42:47.239284+00:00", + "last_modified": "2025-04-01T13:14:44.324274+00:00", "title": "Profile Tweaks", "slug": "profile_tweaks", "author": "santiagosayshey", @@ -57,7 +72,7 @@ { "_id": "Shiny New Stuff", "content": "hey @everyone, hope you guys are well. Here's another update!\n\n# Motivation\n\nI've been really struggling to work on this project for a few months now - I'll finally get some time at the end of the week but feel completely unmotivated to work on it for more than an hour. Well... after cracking the architecture problem last week and seeing all the support from you guys, I've felt especially motivated to dive back in.\n\n# Profilarr v2 (not really v2 but it sounded cool)\n\nProfilarr is getting some really nice upgrades. 
Here's an outline of the most important ones:\n\n## It's now a full stack application.\n\nThis means we have a frontend: a site that users can visit to adjust, import, and export regexes, custom formats, and quality profiles. It's built in a way that aims to 'remaster' how it's implemented in Radarr/Sonarr. All the existing functionality is there, but with some really nice quality of life features:\n\n- **Single definition format**: As outlined in the previous dev log, Profilarr's version of this system will use a single definition format. Notably, this allows you to set regex patterns ONCE, then add that regex as a condition inside a custom format.\n- **Sorting and Filtering**: You can now sort and filter items by title, date modified, etc.\n- **Exporting/Importing**: The standard format now allows _everyone_ to import/export regexes, custom formats, and quality profiles freely - no need to query APIs to do this anymore.\n- **Syncing**: Instead of clogging up everyone's arrs with unused custom formats, the sync functionality now only imports _used_ items.\n- **Mass selection**: You can mass select items to import/export/sync/delete.\n- **Tags**: Instead of manual selection, you can set tags on specific custom formats/quality profiles that should be synced. This works similar to how Prowlarr uses tags to selectively sync indexers. Since we are also using the same database for the website, tags can also be used for little tidbits of information too. Like where a release group is an internal at!\n- **Testing**: Developers can now permalink regexes to regex101. This makes it really easy to develop and test simultaneously.\n- **Descriptions**: You can now explain what specific items are for. No need to look it up on the website to see what it does.\n\n## Backend Improvements\n\nThe backend is essentially what Profilarr is right now - a tool to sync some JSON files to your arrs. 
However, this also has some major improvements:\n\n- **Git integration**: You can select a remote repository to connect to and:\n - Add, commit, and push files; branch off; merge into. This isn't that useful for end users, but I cannot stress enough how much time and suffering this has saved me. Being able to revert regex/custom format/quality profiles to the last commit is my favorite thing I've ever coded.\n - **Branching**: You can have different branches for different things. Of course, this is useful for development, but it also allows you to do things like: separate setups for Radarr/Sonarr/Lidarr. Most importantly, it allows us developers to set stable, dev, and feature branches.\n - **Pulling**: You can now pull in changes from specific branches from a remote repository. You can view differences and decide if you want to pull these changes in. You can set it to be automatic and only alert on merge conflicts (you change something, but an incoming change for that item exists as well). You can choose to get the most stable branch or the latest features merged into develop.\n - **External sources**: You can set your own repo of regexes, custom formats, and quality profiles and share it with whoever you want. As I mentioned in my last dev log, I'll be working on a compiler to convert between our standard Profilarr format and the existing arr format. The really cool thing about this is it works both ways. This means the git integration + compiler will allow you to use Profilarr with the TRaSH Guides. It'll probably take some tweaking, but I know it's definitely possible now.\n\n## Containerisation\n\nProfilarr will FINALLY be dockerised.\n\n# Development\n\nWith these changes in place, it has massively improved and sped up development. Working in a proprietary tool now allows me the freedom to just implement a feature whenever I want to. Want to filter custom formats with the release tier tag? Boom, implemented. 
Want to auto-apply scores to custom formats in quality profiles based on tags? Boom, implemented.\n\n## Machine Learning\n\nThis part is mostly speculation and rambling - nothing concrete yet. I really want to incorporate some kind of AI help into Profilarr. A button you can press to auto-generate regex or a custom format. I've read countless Reddit posts of someone unfamiliar with regex/custom formats/profiles asking for help in trying to learn. \"How do I write a custom format that matches x265 releases under size x?\" It's so easily solved using AI.\n\nI want to implement this one day, I just don't have enough knowledge or experience to do it yet. The best I've come up with is something that sends a request to OpenAI's API with a prompt. The results are less than ideal. But just imagine the future where some kind of machine learning tool has access to an entire database of regexes, custom formats, and quality profiles curated by hundreds of people, and can use that knowledge to predict patterns and truly tailor stuff to suit people's needs. 
Who knows if it ever gets to that point, but that's my vision for Dictionarry.\n\nRamble over, as you can tell I've been feeling pretty motivated lately!\n\nAnyway, here's some images of profilarr v2.\n\n**Regex Page**:\n\n![Regex Page](https://i.imgur.com/kMZ9qII.png)\n\n**Custom Format Page**:\n\n![Custom Format Page](https://i.imgur.com/mCyDxId.png)\n\n**Status Page**:\n\n![Status Page](https://i.imgur.com/ZleeOEF.png)\n\nOf course, everything is still a heavy work in progress.\n\nThat's all for today!", - "last_modified": "2025-03-26T11:42:47.239284+00:00", + "last_modified": "2025-04-01T13:14:44.324274+00:00", "title": "Shiny New Stuff", "slug": "shiny_new_stuff", "author": "santiagosayshey", @@ -70,7 +85,7 @@ { "_id": "Vision Almost Realised", "content": "Hey @everyone, small log for today!\n\n```bash\n$ python profile_compile.py 'profiles/1080p Encode.yml' '1080p Encode (sonarr - master).json' -s\nConverted profile saved to: 1080p Encode (sonarr - master).json\n\n$ python importarr.py\nImporting Quality Profiles to sonarr : Master\nUpdating '1080p Encode' quality profile : SUCCESS\n```\n\nThese two commands are the culmination of the architecture overhaul I talked about in August: https://discord.com/channels/1202375791556431892/1246504849265266738/1272756617041154049. The Profilarr standard format _**works**_. A typical profile is now about 300 lines (down from 1000 each for radarr / sonarr), is able to be compiled from PSF to Radarr OR Sonarr (and back!). Regex patterns allow format resolution, so no more editing the same thing 5, 10... 20 times.\n\nI'm currently in the process of hooking up the database to the new website, and that's looking pretty cool too. 
I cannot even explain how good it feels to be able to edit a profile once inside Profilarr, push those changes directly from Profilarr, have those changes reflected as incoming changes for end users, and as updated information on the website all in one fell swoop.\n\nIt's taken a huge effort the past 4 months, and I still have to actually connect it to the backend, but I'm fairly happy with how it's turned out. The changes won't be all that evident right away for you guys, but it's going to save me (and anyone who wants to contribute) hours upon hours of development time for everything that I have planned.\n\n## Golden Popcorn Performance Index Changes\n\nThe current GPPi algorithm is strong, but fundamentally flawed. It does not take into consideration release groups who have no data. There are terrific new groups (ZoroSenpai for example) who should be tier ~2 at least, but aren't simply because they have no data. How do we fix this?\n\n### Popularity\n\nFor every encode at a specific resolution for a movie / tv show that is currently _popular_, a release group receives +1 score to their GPPi. At the end of every month, the score is reset, and the previous score is normalized (tbd on how) and added to their permanent GPPi score (up to a certain point, and probably never past tier ~3).\n\nThis process will be completely automatic and will hopefully solve the problem of new good release groups.\n\n### Grouping\n\nThe previous 'tiers' for release groups were just natural, intuitive grouping. Humans are surprisingly very, very good at pattern recognition so it was never really a problem. However, it was manual, and we don't like manual around here. Enter 'k-means clustering'. Essentially it's just a fancy algorithm that finds natural break points between groups of numbers. Using k-means, I've dropped the number of 1080p Tiers from 7 down to 5 which in turn has increased immutability. 
Small changes, but they will be important in the long run.\n\n## Thank You!\n\nThat's all for today, I hope everyone's doing alright and enjoying the holidays :grinning:", - "last_modified": "2025-03-26T11:42:47.239284+00:00", + "last_modified": "2025-04-01T13:14:44.324274+00:00", "title": "Vision (Almost) Realised", "slug": "vision_almost_realised", "author": "santiagosayshey", @@ -84,7 +99,7 @@ { "_id": "Website 2.0", "content": "Hey everyone, medium-ish update today.\n\n## Website 2.0\n\nI've wanted to transition away from the old site / mkdocs for a while now as it's quite hard to maintain and keep everything up to date, so I built a new site using Next.js that uses ISR to rebuild its content using the Dictionarry database. Basically this just means:\n\n- Database gets an update -> Website sees its data is stale -> Website rebuilds itself with new data -> Santiago smiles in not needing to do anything\n\nThis all ties into the whole \"write once\" philosophy that I instilled with Profilarr and has made development much easier. There are still quite a few layout issues and perhaps a devlog refactor I need to fit in somewhere, but I'm happy to share it with you guys as it is.\n\n[Website 2.0](https://dictionarry.dev/)\n\n![website2.0](https://i.imgur.com/eORTwml.png)\n\nThe old site will go down soon, sorry if I broke anyone's workflows D:\n\n### Profile Selector?\n\nThis idea has gone through many iterations since I started Dictionarry last year.\n\n1. A static flowchart with not nearly enough information / choice: https://github.com/santiagosayshey/website/blob/030f3631b4f6fffdb7fa9f4696e5d12defc84a46/docs/Profiles/flowchart.png\n2. The \"Profile Selector\" (terrible name): https://selectarr.pages.dev/\n3. Frankenstein's triangle: [Discord Link](https://discord.com/channels/1202375791556431892/1246504849265266738/1246536424925171925)\n\nFrankenstein's triangle was supposed to be what I shipped with the new website (and I actually finished it too!). 
It worked by calculating the area of the efficiency/quality/compatibility triangle using some formula named after some guy I forget, to guesstimate user choice based on their previous selection. It did this by normalizing the \"score\" of each profile on each of its axes and finding the best-fitting triangle that used the axis that was changed.\n\nResults were pretty good but I felt that it abstracted _too much_ of what made any user choice meaningful so I decided to scrap it.\n\n### Profile Builder!\n\nIn its place is the \"Profile Builder\" (maybe also a terrible name). It still attempts to abstract audio/video down into more quantifiable groupings, but limits itself to explanations of certain things where more abstraction is detrimental. It's pretty self-explanatory once you use it, but basically you choose through increasingly niche groupings -> resolution -> compression -> encode type -> codec -> HDR. At each step, a list of recommended profiles will be shown. I think this new system helps to fix the \"trying to get the profile I want\" issue as it starts pretty broad and gets increasingly more specific the more things you choose. It's up now, give it a play; let me know if it's good / bad / needs changes: [Profile Builder](https://dictionarry.dev/builder)\n\n![Profile Builder](https://i.imgur.com/ka8KSHl.png)\n\n## Encode Efficiency Index\n\nHere we go, meat and potatoes. This is another release group metric just like the Golden Popcorn Performance Index. 
Here's the play-by-play:\n\n- It evaluates release groups on their average compression ratio (how big their encode is compared to a source), to discern quality and/or efficiency.\n- It can discern transparency by targeting ratios at which a codec begins to \"saturate\"\n- It can discern efficiency by targeting ratios at which a codec reaches its \"efficiency apex\"\n\nThis is a heavily watered-down explanation of the metric; you can read about it (with examples) in very heavy detail [here](https://dictionarry.dev/wiki/EEi). Months of research and iteration have gone into this, and I really think this is Dictionarry's biggest asset so far. When AV1 profiles become a thing, this metric is ready for it.\n\n#### No More Parsing Codecs!!!!\n\nIf you parse the efficiency of a release group directly, then you know you're getting something at a file size you want. This means we don't have to use h265 / x265 as a ridiculous proxy baseline to find content we want anymore. We can just downrank all h264 instead, which is much more reliable.\n\n#### 2160p Quality (Encode) Profile + Release Group Tierlist!!!!!!!!\n\nUsing EEI, we target 4k release groups at 55% target ratio to discern transparency. No golden popcorns needed, no complex trump parsing crap. No \"popular\" vote. Whenever something isn't documented, we simply add that movie / tv show to the data source and groupings update automatically. It's almost like magic.\n\nThis metric has made the 2160p Quality profile possible and I dare say it's the most comprehensive one I've worked on thus far. Give the quality profile and tier lists a read here:\n\n- [2160p Quality Profile](https://dictionarry.dev/profiles/2160p-quality)\n- [2160p Quality Release Group Tiers](https://dictionarry.dev/tiers/2160p/quality)\n\n#### Thanks\n\n- Thanks to @seraphys for helping out with the profile creation / giving constant feedback.\n- Thanks to @erphise for being a tester / the catalyst for the creation of this metric. 
If they hadn't been testing out the HEVC profile, we never would have talked about compression ratios, and I never would have gotten the idea for the metric in the first place.\n\nShow them some love.\n\n## Profilarr\n\nAlmost done, I took a break for a couple weeks to finish up the website but I'm gonna get rolling again soon. I just finalized authentication, database migrations and the pull module. The only major thing left is getting everything ready for production. This means setting up the Docker image, Unraid template, etc, etc. It's hard to say how long this is gonna take since I'm basically learning it all on the fly, so bear with me on this. But it's almost done and a beta test will be out soon (hopefully).", - "last_modified": "2025-03-26T11:42:47.239284+00:00", + "last_modified": "2025-04-01T13:14:44.324274+00:00", "title": "Website 2.0", "slug": "website2.0", "author": "santiagosayshey", diff --git a/bundles/version.json b/bundles/version.json index 6dac58c..c84fd55 100644 --- a/bundles/version.json +++ b/bundles/version.json @@ -1,5 +1,5 @@ { - "updated_at": "2025-03-26T11:42:52.608966+00:00", + "updated_at": "2025-04-01T13:14:47.091277+00:00", "folders": [ "custom_formats", "profiles", diff --git a/bundles/wiki.json b/bundles/wiki.json index fd36ce3..b8bdc91 100644 --- a/bundles/wiki.json +++ b/bundles/wiki.json @@ -2,7 +2,7 @@ { "_id": "EEi", "content": "This metric is aimed at identifying and ranking release groups based on their propensity to release **encodes that meet certain compression ratios**, with particular focus on **HEVC** releases where optimal efficiency occurs in specific bitrate ranges. By ranking these groups, we effectively prioritize releases that maximize HEVC's compression capabilities while maintaining quality at minimal file sizes.\n\n## What is a Compression Ratio?\n\nA compression ratio is a (made up) metric that evaluates encodes against their sources. 
We express this as the **encoded file size as a percentage of its source size** (typically a **remux** or **WEB-DL**).\n\nFor example:\n\n| Movie | Source (Remux) | Encode | Compression Ratio |\n| ------- | -------------- | ------ | ----------------- |\n| Movie A | 40 GB | 10 GB | 25% |\n| Movie B | 30 GB | 6 GB | 20% |\n| Movie C | 50 GB | 15 GB | 30% |\n\n## Why Is This Important?\n\nUnderstanding compression ratios helps balance two competing needs: **maintaining high video quality while minimizing file size**. Modern codecs like **HEVC** have a **\"sweet spot\"** where they deliver excellent quality with significant size savings. Finding this optimal point is crucial because:\n\n- Storage and bandwidth are always **limited resources**\n- Going beyond certain bitrates provides **diminishing quality returns**\n- Different codecs have different **efficiency curves**\n- Release groups need clear standards for **quality vs. size trade-offs**\n\n## What Ratio is Best?\n\nThere's no one-size-fits-all answer when it comes to choosing the perfect compression ratio. The \"best\" ratio **depends entirely on your specific needs**. At 1080p:\n\n- Space-conscious users might prefer **smaller files (5-10% of source)** with quality trade-offs\n- Quality-focused users might push towards **higher quality (30-40% of source)** for transparency\n- Most users find a sweet spot in the middle\n\nHowever, there are technical limits - ratios above **40% for 1080p** and **60% for 2160p** provide no meaningful benefit.\n\n## Why Set Maximum Ratios of 40% and 60%?\n\nThe compression ratio ceilings are set based on different factors for 1080p and 2160p content:\n\n### 1080p (40% Maximum)\n\nThe 40% ceiling for 1080p exists because we can roughly measure where **HEVC stops being efficient compared to AVC**. 
We do this using two key video quality metrics:\n\n- **VMAF** - analyzes how humans perceive video quality and scores it from 0-100\n- **BD-Rate** - tells us how much smaller one encode is compared to another while maintaining the same quality level\n\nUsing these tools together shows us that:\n\n- HEVC achieves **20-40% smaller files** in the mid-bitrate range (~2-10 Mbps for 1080p)\n- These space savings are consistent across different quality levels\n- Beyond this point, both codecs achieve **near identical quality**\n- At ratios above 40%, **AVC becomes preferred** due to better tooling and quality control\n\n### 2160p (60% Maximum)\n\nThe 60% ceiling for 2160p content is based on different considerations:\n\n- This is approximately where **visual transparency** becomes achievable\n- Higher ratios provide **diminishing returns**\n- At this compression level, content achieves **VMAF scores above 95**\n- **Storage efficiency** becomes critical due to larger base file sizes\n- Quality improvements become **increasingly subtle** beyond this point\n\nRead these articles to better understand how VMAF and BD-Rate tell us how efficient a codec is.[^1][^2]\n\n## How Do We Apply This Index?\n\nThe ranking system works by calculating how close each Release Group / Streaming Service comes to achieving a user's desired compression ratio. This is done through a few key steps:\n\n1. **Delta Calculation**: We calculate the absolute difference (delta) between a group's average compression ratio and the target ratio. For example, if a group averages 25% compression and our target is 20%, their delta would be |25 - 20| = 5 percentage points.\n\n2. **K-means Clustering**: We use k-means clustering to automatically group release groups into tiers based on their deltas. 
K-means works by:\n - Starting with k random cluster centers\n - Assigning each group to its nearest center\n - Recalculating centers based on group assignments\n - Repeating until stable\n\n# Example Rankings\n\n## 1080p Examples\n\n### Example 1: Users prioritizing storage efficiency (10% target)\n\nUsers might choose this very aggressive compression target when:\n\n- Managing large libraries on limited storage\n- Collecting complete series where total size is a major concern\n- Primarily viewing on mobile devices or smaller screens\n- Dealing with bandwidth caps or slow internet connections\n\n| Tier | Group | Efficiency | Delta |\n| ---- | ----------------------- | ---------- | ----- |\n| 1 | iVy | 9.37% | 0.63 |\n| 1 | PSA | 7.89% | 2.11 |\n| 2 | Vyndros | 16.08% | 6.08 |\n| 2 | Chivaman | 16.80% | 6.80 |\n| 2 | Amazon Prime (H.265) | 16.15% | 6.15 |\n| 3 | Disney+ (H.265) | 20.32% | 10.32 |\n| 3 | TAoE | 22.78% | 12.78 |\n| 3 | QxR | 23.25% | 13.25 |\n| 3 | BRiAN | 25.16% | 15.16 |\n| 3 | Movies Anywhere (H.265) | 26.05% | 16.05 |\n| 4 | MainFrame | 37.63% | 27.63 |\n| 4 | NAN0 | 37.71% | 27.71 |\n\n### Example 2: Users seeking balanced quality and size (25% target)\n\nThis moderate compression target appeals to users who:\n\n- Have reasonable storage capacity but still want efficiency\n- Watch on mid to large screens where quality becomes more noticeable\n- Want a good balance between visual quality and practical file sizes\n\n| Tier | Group | Efficiency | Delta |\n| ---- | ----------------------- | ---------- | ----- |\n| 1 | BRiAN | 25.16% | 0.16 |\n| 1 | Movies Anywhere (H.265) | 26.05% | 1.05 |\n| 1 | QxR | 23.25% | 1.75 |\n| 1 | TAoE | 22.78% | 2.22 |\n| 2 | Disney+ (H.265) | 20.32% | 4.68 |\n| 3 | Amazon Prime (H.265) | 16.15% | 8.85 |\n| 3 | Chivaman | 16.80% | 8.20 |\n| 3 | Vyndros | 16.08% | 8.92 |\n| 3 | MainFrame | 37.63% | 12.63 |\n| 3 | NAN0 | 37.71% | 12.71 |\n| 4 | iVy | 9.37% | 15.63 |\n| 4 | PSA | 7.89% | 17.11 |\n\n## 2160p Examples\n\n### 
Example 3: Extreme Space Saving (20% target)\n\nThis aggressive 2160p compression appeals to users who:\n\n- Want to maintain a 4K library on limited storage\n- Primarily view content at typical viewing distances where subtle quality differences are less noticeable\n- Need to conserve bandwidth while still enjoying 4K resolution\n- Have a large collection of 4K content and need to balance quality with practical storage constraints\n\nTODO: EXAMPLES\n\n### Example 4: Balanced 4K (40% target)\n\nThis middle-ground approach is ideal for users who:\n\n- Have decent storage capacity but still want reasonable efficiency\n- Watch on larger screens where quality differences become more apparent\n- Want to maintain high quality while still keeping files manageable\n- Need reliable HDR performance without excessive file sizes\n\nTODO: EXAMPLES\n\n### Example 5: Near Transparent Quality (60% target)\n\nThis higher bitrate target is chosen by users who:\n\n- Have ample storage and prioritize maximum quality consciously\n- Watch on high-end displays where subtle quality differences are noticeable\n- Want to maintain archive-quality collections\n- Focus on difficult-to-encode content where compression artifacts are more visible\n\nTODO: EXAMPLES\n\nThese examples demonstrate how different groups excel at different target ratios, and how streaming services tend to maintain consistent compression approaches regardless of user preferences. 
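As a rough, self-contained illustration of the two-step process described earlier (delta calculation, then k-means tiering), here's a Python sketch. The group names and average ratios are borrowed from the example tables, the tiny 1-D k-means is hand-rolled for clarity, and none of this is the production implementation:

```python
# Hypothetical sketch of EEi tiering: compute each group's delta from a target
# compression ratio, then bucket groups into tiers with a tiny 1-D k-means.
# Group names / average ratios are taken from the 25%-target example table.

def kmeans_1d(values, k, iters=100):
    """Plain 1-D k-means; centers start evenly spread across the value range."""
    lo, hi = min(values), max(values)
    centers = [lo + (hi - lo) * i / (k - 1) for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            clusters[min(range(k), key=lambda i: abs(v - centers[i]))].append(v)
        new_centers = [sum(c) / len(c) if c else centers[i]
                       for i, c in enumerate(clusters)]
        if new_centers == centers:  # converged
            break
        centers = new_centers
    return centers

def tier_groups(ratios, target, k=3):
    """Map {group: avg compression ratio} -> {group: tier}; tier 1 = closest."""
    deltas = {g: abs(r - target) for g, r in ratios.items()}
    centers = sorted(kmeans_1d(list(deltas.values()), k))
    return {g: 1 + min(range(k), key=lambda i: abs(d - centers[i]))
            for g, d in deltas.items()}

ratios = {"BRiAN": 25.16, "QxR": 23.25, "Disney+ (H.265)": 20.32,
          "Vyndros": 16.08, "PSA": 7.89, "NAN0": 37.71}
tiers = tier_groups(ratios, target=25.0)
```

With a 25% target, this toy version puts the low-delta groups (BRiAN, QxR) in tier 1 and pushes PSA to the last tier; the real system works over the full dataset and may choose k differently.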
The rankings help users quickly identify which releases will best match their specific quality and size requirements.\n\n## Frequently Asked Questions\n\n| Question | Answer |\n| -------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| Why not just detect h265/x265 releases? Isn't that simpler? | This is a common misconception that \"HEVC = smaller = better\". While it's true that HEVC/x265 _can_ achieve better compression than AVC/x264, simply detecting the codec tells us nothing about the actual efficiency of the specific encode. A poorly encoded HEVC release can be larger and lower quality than a well-tuned x264 encode. By focusing on compression ratio instead of codec detection, we measure what actually matters - how efficiently the release uses storage space while maintaining quality. 
This approach has several advantages:

- It rewards efficient encodes regardless of codec choice
- It catches inefficient HEVC encodes that waste space
- It avoids the complexity of parsing inconsistent HEVC labeling (h265/x265)
- It future-proofs the system for newer codecs like AV1, where we can simply adjust our codec ranking priorities (AV1 > HEVC > AVC) while still maintaining the core efficiency metric

Think of it this way: users don't actually care what codec is used - they care about getting high quality video at reasonable file sizes. Our metric measures this directly instead of using codec choice as an unreliable proxy. |\n| But doesn't this ignore quality? | The current encoding landscape places tremendous emphasis on maximizing absolute quality, often treating file size as a secondary concern. This metric aims to challenge that, or at least find a middle ground - we care about quality (hence why we use proper sources as our baseline and consider VMAF scores), but we acknowledge that most users only care about getting file sizes they actually want, and not the marginal quality improvements you get from encoding from a remux, compared to a web-dl. Rather than taking either extreme position - \"quality above all\" or \"smaller is always better\" - we focus on _efficiency_: getting the best practical quality for any given file size target. This approach **will not** satisfy quality enthusiasts, but it better serves the needs of most users. |\n| What if the source is not a 1080p remux? How do you tell? | This metric, like any data-driven system, will never achieve 100% accuracy. However, we can parse various indicators beyond just the release group or streaming service to identify non-remux sources. For example, we can identify when a non-DS4K WEB-DL or non-webrip from a reputable group is likely sourced from another lossy encode rather than a remux. We also maintain a manual tagging system to downrank certain release groups known for reencoding from non-high-quality sources. Groups like PSA and MeGusta will be ranked lower in the system, regardless of their efficiency scores, due to their known practices. |\n| How do you prefer HEVC? | We actually approach this from the opposite direction - instead of preferring HEVC, we downrank AVC. 
This is because HEVC naming conventions are inconsistent (groups use x265 and h265 interchangeably), making them difficult to parse reliably. In contrast, AVC is almost always labeled consistently as either x264 or h264, making it much easier to identify and downrank these releases. |\n| Why not consider releases above 40% efficiency? | For standard 1080p non-HDR content, above 40% compression ratio, x264 and x265 perform nearly identically in terms of VMAF scores, eliminating HEVC's key advantages. At this point, x264 becomes the preferred choice across all metrics - the encodes are easier to produce, far more common, and typically undergo more rigorous quality control. There's simply no compelling reason to use HEVC at these higher bitrates for standard 1080p content. |\n| What about animated content? | Animated content typically has different compression characteristics than live action - it often achieves excellent quality at much lower bitrates due to its unique properties (flat colors, sharp edges, less grain). Ideally, we would use higher target ratios for live action and lower ones for animation. However, reliably detecting animated content programmatically is extremely challenging. While we can sometimes identify anime by certain keywords or release group patterns, western animation, partial animation, and CGI-heavy content create too many edge cases for reliable detection. For now, we treat all content with the same metric, acknowledging this as a known limitation of the system. Users seeking optimal results for animated content may want to target lower compression ratios than they would for live action material, perhaps via a duplicate profile at a different compression target. |\n| Why does transparency require 60% at 2160p compared to 40% at 1080p? | The higher ratio requirement for 2160p content stems from several technical factors that compound to demand more data for achieving transparency:

1. **Increased Color Depth**: Most 2160p content uses 10-bit color depth compared to 8-bit for standard 1080p content. This 25% increase in bit depth requires more data to maintain precision in color gradients and prevent banding.

2. **HDR Requirements**: 2160p content often includes HDR metadata, which demands more precise encoding of brightness levels and color information. The expanded dynamic range means we need to preserve more subtle variations in both very bright and very dark scenes.

3. **Resolution Scaling**: While 2160p has 4x the pixels of 1080p, compression efficiency doesn't scale linearly. Higher resolution reveals more subtle details and film grain, which require more data to preserve accurately.

These factors combine multiplicatively rather than additively, which is why we need a 50% increase in the compression ratio ceiling (from 40% to 60%) to achieve similar perceptual transparency. |\n| Do all 2160p releases need 60% for transparency? | No, the actual requirements vary significantly based on several factors:

1. **Content Type**:
- Animation might achieve transparency at 30-40%
- Digital source material (like CGI-heavy films) often requires less
- Film-based content with heavy grain needs the full 60%

2. **HDR Implementation**:
- SDR 2160p content can often achieve transparency at lower ratios
- Dolby Vision adds additional overhead compared to HDR10
- Some HDR grades are more demanding than others

3. **Source Quality**:
- Digital intermediate resolution (2K vs 4K)
- Film scan quality and grain structure
- Original master's bit depth and color space

4. **Scene Complexity**:
- High motion scenes need more data
- Complex textures and patterns require higher bitrates
- Dark scenes with subtle gradients are particularly demanding |\n\n[^1]: Shen, Y. (2020). \"Bjontegaard Delta Rate Metric\". Medium Innovation Labs Blog. https://medium.com/innovation-labs-blog/bjontegaard-delta-rate-metric-c8c82c1bc42c\n[^2]: Ling, N.; Antier, M.; Liu, Y.; Yang, X.; Li, Z. (2024). \"Video Quality Assessment: From FR to NR\". Electronics, 13(5), 953. https://www.mdpi.com/2079-9292/13/5/953", - "last_modified": "2025-03-26T11:42:47.249284+00:00", + "last_modified": "2025-04-01T13:14:44.334274+00:00", "title": "Encode Efficiency Index", "slug": "EEi", "author": "santiagosayshey", @@ -17,7 +17,7 @@ { "_id": "FAQ", "content": "This entry is dedicated to providing answers to the most frequently asked questions about Dictionarry / Profilarr.\n\n| Question | Answer |\n| ------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
----- |\n| Why isn't the highest scored release being grabbed? | You may have Prefer Propers / Repacks turned on. This option forces releases with a proper / repack flag to be grabbed, even if its Custom Format score is not the highest. To turn it off, navigate to Settings > Media Management > File Management and set Prefer Propers / Repacks to Do Not Prefer. |\n| What's the difference between h264, x264, AVC, h265, x265 and HEVC? | **H.264 (AVC)**: A video compression standard.
**x264**: An open source encoder that produces H.264 videos.
**H.265 (HEVC)**: A more advanced video compression standard than H.264, offering better compression and quality for 4K and higher resolutions.
**x265**: An open source encoder that produces H.265 videos.

**Key Points**:
- HEVC/AVC refers to the codec in general
- H.264/5 refers to a lossless rip (WEB-DL or remux)
- x264/5 refers to encoded content (WEBRip or Blu-ray encode)

_Note: Many HEVC files are mislabeled, making it challenging to distinguish between lossless and lossy releases based on release names alone._ |\n| What quality settings should I use? | It's suggested that you set everything to min / max, since the profiles use custom formats to do the major selections. However, you might run into the occasional sample download if you use lots of usenet indexers. If you do find that these are being grabbed, you can set the minimum to 1-2 GB per hour for whatever quality you need. |\n| What does \"Transparency\" mean? | Audiovisual transparency refers to the degree to which an encoded audio or video signal is indistinguishable from the original source signal. The term \"transparency\" stems from the idea that the encoding and decoding processes are imperceptible, as if the system were _transparent_.

- An audio codec with high transparency will produce an encoded signal that, when decoded, is identical to the original audio source, without any discernible differences in frequency response, dynamic range, or noise floor.

- A video codec exhibiting transparency will generate an encoded signal that, upon decoding, results in a picture that is visually indistinguishable from the source video in terms of resolution, color space, and pixel-level detail.

Objective metrics, such as [VMAF (Video Multi-Method Assessment Fusion)](https://en.wikipedia.org/wiki/Video_Multimethod_Assessment_Fusion), are sometimes used to measure transparency by comparing the encoded signal to the original source and calculating a numerical score that quantifies the perceptual similarity between the two, with higher scores indicating greater transparency. |", - "last_modified": "2025-03-26T11:42:47.250284+00:00", + "last_modified": "2025-04-01T13:14:44.334274+00:00", "title": "FAQ", "slug": "faq", "author": "santiagosayshey", @@ -31,7 +31,7 @@ { "_id": "GPPi", "content": "## What are Golden Popcorns?\n\n**_Golden Popcorns_** are _very high quality encodes_, marked as such by one of the best private torrent trackers. These releases are manually reviewed by a dedicated, experienced team of _Golden Popcorn_ checkers. Golden Popcorns are the simplest way to quantify a subjective _best_ encode.\n\n## The Decision Engine\n\nThe Golden Popcorn Performance Index, or GPPI, is a calculated metric, pivotal to the [Transparent](../Profiles/1080p%20Transparent.md) profile's decision-making process. It's engineered to rank release groups based on their propensity to release a Golden Popcorn encode at any given resolution $r$.\n\n## Formula\n\nOn first glance, it seems the most obvious way to determine which release groups are most likely to release golden popcorns is to find their Golden Popcorn Ratio, i.e. The number of Golden Popcorns divided by the total number of encodes for any given resolution _r_.\n\nHowever, If we were to take Golden Popcorn ratio at face value, we might incorrectly prioritise a release group who has a high GP ratio, but a low number of encodes. 
On the opposite spectrum, if we take the raw number of Golden Popcorns for any group, we might incorrectly prioritise a group with a low GP ratio.\n\nSo instead, we multiply the number of Golden Popcorns at resolution $r$ for a given release group, by a factor of said release group's Golden Popcorn Ratio. This essentially limits both metrics as a factor of each other.\n\nFor any given resolution _r_, the GPPI is defined as:\n\n$$\n\\begin{aligned}\n\\text{GPPI}_r &= GPE_r \\cdot \\left( \\frac{GPE_r}{E_r} \\right) \\\\\n &= \\frac{GPE_r^2}{E_r}\n\\end{aligned}\n$$\n\nWhere:\n\n- $\\text{GPPI}_r$ is the Golden Popcorn Performance Index at resolution $r$\n- $GPE_r$ is the number of Golden Popcorns at resolution $r$\n- $E_r$ is the total number of encodes at resolution $r$", - "last_modified": "2025-03-26T11:42:47.250284+00:00", + "last_modified": "2025-04-01T13:14:44.334274+00:00", "title": "Golden Popcorn Performance Index", "slug": "GPPi", "author": "santiagosayshey", @@ -46,7 +46,7 @@ { "_id": "RGP", "content": "## So, how does Dictionarry _actually simplify media automation?_\n\nWell, first we need to understand that we're trying to **automate the subjective analysis of how \"good\" a release is**. To do that, we need to first define **what \"good\" even means**. To some people, it could mean how well something looks on their screen, or sounds through speakers; we define this as _quality_. To others, it means how many releases they can download while still maintaining some kind of quality standard; we define this as _efficiency_.\n\nSo, that leads us to a new question - _how do we measure quality and efficiency_? 
You might think we'd want to parse releases and find their technical properties: resolution, bitrate, video / audio codecs, hdr, etc.\n\n```\nRelease 1 (25.2 GiB): Blockbuster Movie A 2022 Hybrid 1080p WEBRip DDPA5.1 x264-group A\n\nRelease 2 (27.3 GiB): Blockbuster Movie A.1080p.WEBRip.DD+7.1.x264-group B\n```\n\nLooking at these two releases, you'll notice that they both have the EXACT same technical specification and would rank equally. But they're different sizes... so which is better? Using audio / video properties to measure quality / efficiency can be effective, but is largely **limited by the information that they convey**. You can't adequately answer which is better just by looking at these releases in isolation. So how do we not look at these releases in isolation? Or rather, how do we _extrapolate information that isn't already there?_\n\n### Group Tags\n\nOur answer lies in the little bit of information at the end of every release - its **group tag**. Dictionarry tracks historic release group data in order to **rank groups based on their propensity to reach quantifiable levels of quality and efficiency**. We do this using two metrics:\n\n1. Golden Popcorn Performance Index (GPPi): How many golden popcorns a release group has, as a ratio of their total number of releases\n2. Encode Efficiency Index (EEi): The average size of a release group's encode compared to its likely source.\n\nThese metrics are **evidence based, data driven and objective**.\n\n### TL;DR\n\nTL;DR: Dictionarry **simplifies media automation by prioritizing release groups that achieve quantifiable levels of quality and efficiency through objective measurement**. These release group rankings are built and maintained as custom formats to be scored in their respective quality profiles. 
You can review these group rankings below.", - "last_modified": "2025-03-26T11:42:47.250284+00:00", + "last_modified": "2025-04-01T13:14:44.334274+00:00", "title": "Release Group Philosophy", "slug": "RGP", "author": "santiagosayshey", @@ -62,7 +62,7 @@ { "_id": "development", "content": "Profilarr functions as both a synchronization tool for end users and a complete development platform for developers. While most users will simply connect to existing databases to receive updates, Profilarr's development capabilities allow for creating, testing, and contributing custom media configurations back to the community through its Git integration.\n\n## Setting Up Your Database Repository\n\nTo use Profilarr's development features, you'll need a GitHub repository for your database. You have two options:\n\n### Option 1: Fork a PSF Database\n\n1. Go to https://github.com/Dictionarry-Hub/database (or any other Profilarr Standard Format Database)\n2. Click the \"Fork\" button in the top-right corner\n3. Follow the prompts to complete the fork process\n4. Your forked repository will now be ready to use with Profilarr\n\n### Option 2: Create a New Database Repository\n\n1. Click the \"+\" in the top-right corner and select \"New repository\"\n2. Give your repository a name (like \"profilarr-database\")\n3. Set visibility to public or private as needed (it needs to be public if you intend to share it)\n4. Click \"Create repository\"\n5. Clone the repository to your local machine\n6. Create three folders: `custom_formats`, `regex_patterns`, and `profiles`\n7. Add a `.gitkeep` file in each folder (this empty file is necessary to ensure Git tracks these folders; otherwise, they won\u2019t be included in the repository, which may cause errors in Profilarr)\n8. 
Commit and push these changes to your repository\n\n## Development Configuration\n\n### Generate a GitHub Personal Access Token (PAT)\n\nTo allow Profilarr to connect and push to your remote database, you'll need to generate a GitHub Personal Access Token (PAT). This token gives Profilarr permission to access and update your GitHub repository.\n\n1. Sign in to your GitHub account\n2. Go to Settings > Developer settings > Personal access tokens\n3. Click \"Generate new token\"\n4. Choose **Fine-grained**\n5. Give your token a descriptive name (e.g., \"Profilarr Development\")\n6. Apply the following permissions:\n - **Repository access:** Select your database repository\n - **Permissions:** Set `contents` and `metadata` to **Read & Write**\n7. Click \"Generate token\"\n8. Copy your new token (make sure to save it somewhere safe, as you won\u2019t be able to see it again)\n\n### Configure Your User Information\n\nYou'll also need to provide a username and email for Git. These will be associated with any commits you make to the database:\n\n- **Username**: This will appear in commit logs and will be visible to other contributors\n- **Email**: This will be used for Git commits and may be visible in public repositories\n\n### Create an Environment File\n\nCreate a `.env` file with the following information. This is required for database contributions:\n\n```\nGIT_USER_NAME=your_username\nGIT_USER_EMAIL=your_email\nPROFILARR_PAT=your_github_pat\n```\n\n\u26a0 **Security Note:** Avoid committing `.env` files containing secrets to public repositories. If working on a shared system, store credentials in a separate `.env.local` file or configure them directly in Docker. 
To ensure these files are ignored by Git, add the following entry to your `.gitignore` file:\n\n```\n.env\n.env.local\n```\n\n## Setup\n\nWith your credentials configured, you can now deploy Profilarr for development.\n\n### Docker Compose (recommended)\n\n```yaml\nservices:\n profilarr:\n image: santiagosayshey/profilarr:latest # or :beta for pre-release versions\n container_name: profilarr\n ports:\n - 6868:6868\n volumes:\n - /path/to/your/data:/config\n environment:\n - TZ=UTC # Set your timezone\n env_file:\n - .env # Required for database contributions\n restart: unless-stopped\n```\n\n### Docker CLI\n\n```bash\ndocker run -d \\\n --name=profilarr \\\n -p 6868:6868 \\\n -v /path/to/your/data:/config \\\n -e TZ=UTC \\\n --env-file .env \\\n --restart unless-stopped \\\n santiagosayshey/profilarr:latest # or :beta for pre-release versions\n```\n\n### Unraid\n\nFor Unraid users, the Profilarr Community App includes placeholders for required environment variables. To enable development mode, you must replace these placeholders with your actual credentials:\n\n- `GIT_USER_NAME`\n- `GIT_USER_EMAIL`\n- `PROFILARR_PAT`\n\n## Verification\n\nTo confirm that everything is set up correctly, check the startup logs for Git user initialization. 
The logs should include entries similar to the following:\n\n```\nprofilarr | 2025-03-18 20:08:35 - app.init - INFO - Initializing Git user\nprofilarr | 2025-03-18 20:08:35 - app.init - INFO - Configuring Git user\nprofilarr | 2025-03-18 20:08:35 - app.init - DEBUG - Retrieved Git config: Name - santiagosayshey, Email - user@example.com\nprofilarr | 2025-03-18 20:08:35 - app.db.queries.settings - DEBUG - PAT status verified\nprofilarr | 2025-03-18 20:08:35 - app.init - INFO - Git user configuration completed\nprofilarr | 2025-03-18 20:08:35 - app.init - INFO - Git user initialized successfully\n```\n\n## Troubleshooting\n\nIf you encounter issues with your development setup:\n\n| Issue | Possible Solution |\n| -------------------------------------------- | ----------------------------------------------------------------------------------- |\n| **GitHub token not working** | Verify your PAT has `contents` and `metadata` read/write permissions |\n| **Profilarr fails to access the repository** | Ensure your repository is public (or your token has access to private repositories) |\n| **Git username/email not recognized** | Run `git config --global user.name` and `git config --global user.email` to verify |\n| **Cannot push to repository** | Ensure your container has network access to GitHub (try `ping github.com`) |\n| **Updated `.env` not applied** | Remove and recreate the container to reload environment variables |\n\nFor additional help or to contribute to Profilarr, join our community on [GitHub](https://github.com/santiagosayshey/profilarr) or [Discord](https://discord.gg/Y9TYP6jeYZ).\n\n## Contributing to Databases\n\n1. **Link Your Fork in Profilarr**\n\n - Open Profilarr and navigate to the database settings.\n - Enter the GitHub repository URL of your forked database.\n\n2. **Make Changes in Profilarr**\n\n - Use Profilarr's built-in tools to modify or add database entries.\n - Profilarr will handle formatting and validation automatically.\n\n3. 
**Commit and Push Changes**\n\n - Profilarr provides actions to **revert, stage, commit, and push** changes.\n - After making changes, stage them using the **Stage** button.\n - Once staged, commit the changes with a commit message.\n - Finally, use the **Push** button to send your changes to your GitHub fork.\n - Roll back any unwanted changes using the **Revert** button.\n\n4. **Create a Pull Request (PR)**\n - Go to your fork on GitHub and navigate to the \"Pull Requests\" tab.\n - Click \"New pull request\" and select your fork and branch.\n - Provide a clear description of the changes and submit the PR.\n - Wait for review and approval before merging.\n\n### \u26a0 Editing Databases Directly\n\nWhile it's possible to edit database files manually in an IDE or on GitHub, this is not recommended unless you fully understand Profilarr\u2019s formatting and validation rules. Profilarr enforces constraints to ensure data integrity, and bypassing these safeguards can lead to:\n\n- Corrupted or invalid files that Profilarr cannot process correctly.\n- Unexpected behavior when syncing with Profilarr.\n- Inconsistent formatting, leading to rejected updates.\n\nTo make modifications, it's strongly advised to use Profilarr\u2019s built-in editing tools whenever possible. If direct edits are necessary, always validate the changes in a local instance of Profilarr before pushing them to the repository.", - "last_modified": "2025-03-26T11:42:47.250284+00:00", + "last_modified": "2025-04-01T13:14:44.335274+00:00", "title": "Development Setup", "slug": "development-setup", "author": "santiagosayshey", @@ -79,7 +79,7 @@ { "_id": "edition", "content": "By default, Dictionarry's profiles prefer the ['Special' Edition](https://dictionarry.dev/formats/special-edition) of each movie. 
These editions are often considered the more 'definitive' version of a movie: they contain the director's complete creative vision without studio interference or runtime constraints, and are often recommended over their theatrical counterparts.\n\n| Movie | Preferred Version | Reasons |\n| ----- | ----- | ----- |\n| Aliens (1986) | Special | James Cameron's Special Edition enhances the film with crucial character development, particularly the scenes about Ripley's daughter, which add emotional depth to her relationship with Newt. While the theatrical cut has tighter pacing, added content like the sentry gun sequences provides valuable world-building and tension. The colony scenes provide important context that enriches rather than spoils the story. 
|\n| Blade Runner (1982) | Final Cut | The Final Cut (2007) is considered the definitive version over theatrical, workprint, and Director's Cut releases. It removes the theatrical's controversial voice-over narration and \"happy ending\" that were studio-mandated and disliked by cast and crew. It preserves the original's ambiguous ending about Deckard's nature while fixing numerous continuity errors and technical issues. Key improvements include: cleaned up wire removal in spinner scenes, fixed lip sync in Zhora's death scene, digital correction of the obvious stunt double's face, properly matching the number of replicants mentioned to those shown, correction of the dove release scene's obvious day-for-night shooting, improved color timing that better matches Jordan Cronenweth's original cinematography, and restoration of the full unicorn dream sequence that better supports the film's central mysteries. While some defend elements of other versions (particularly the 1992 Director's Cut), the Final Cut represents Ridley Scott's complete creative vision with modern technical capabilities to properly realize it. |\n| The Lord of the Rings Trilogy (2001-2003) | Extended Editions | Each film's Extended Edition adds crucial character development, world-building and plot points that enrich the story: Fellowship adds the gift-giving scene and more Lothlorien. Two Towers expands Boromir/Faramir's backstory, adds Theodred's funeral for deeper Rohan culture. Return of the King adds the Witch King destroying Gandalf's staff, Saruman's fate, and House of Healing. The additional 30-50 minutes per film are so seamlessly integrated that many fans consider these the definitive versions. |\n| Batman v Superman: Dawn of Justice (2016) | Ultimate Edition | The 3-hour cut restores crucial plot threads that explain character motivations and fill plot holes. 
Added scenes show Superman actually helping people, Lex's manipulation of both heroes, and clearer reasons for the African incident blamed on Superman. The extended cut makes the story more coherent while better developing both protagonists' perspectives. |\n| The Abyss (1989) | Special Edition | The extended version restores a crucial tidal wave sequence that better explains the aliens' motivations and adds a stronger environmental message to the ending. Additional scenes provide more context for the NTIs (non-terrestrial intelligence) and their purpose, while expanding character relationships. Most notably, the restored ending gives the film a more impactful and complete conclusion that Cameron originally intended. |\n| Midsommar (2019) | Director's Cut | The 171-minute version adds key scenes that provide deeper insight into the relationship dynamics, particularly Christian's gaslighting of Dani. Additional folk-horror rituals and customs make the H\u00e5rga community feel more developed and their practices more grounded. The added character moments make the emotional climax more impactful. |\n| I Am Legend (2007) | Alternate Version | This version's different ending completely changes the meaning of the title and stays truer to Richard Matheson's novel. Instead of Smith's character killing himself to stop the creatures, he realizes they are actually intelligent beings protecting their own, making him the monster of their legends - their \"legend.\" This ending better serves the film's themes about humanity and perspective. |\n| Watchmen (2009) | Director's Cut | The 186-minute version adds essential character depth and crucial plot elements from the graphic novel, including more of Hollis Mason and his death scene. The extended cut better develops the complexity of the alternate 1985 setting and the moral ambiguity of its characters. 
The Ultimate Cut, which adds the Tales of the Black Freighter animation, is considered by some fans to be even more complete, though the Director's Cut is the most widely preferred version. |\n| Superman II (1980/2006) | The Richard Donner Cut | Released 26 years after the theatrical version, Donner's cut restores his original vision before he was replaced by Richard Lester. It removes the slapstick comedy, restores Marlon Brando's scenes as Jor-El, and features a different ending that ties better to the first film. The more serious tone and stronger character development make it the preferred version for most fans. |\n\nHowever, while special editions often expand and enrich films, theatrical versions have their own merits that many cinephiles and critics prefer. Theatrical cuts typically offer tighter pacing, maintain the mystery of intentional ambiguity, and preserve the historical significance of films as they were originally experienced by audiences. Here's why some prefer theatrical versions:\n\n| Movie | Preferred Version | Key Reasons |\n| --------------------------------- | ----------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| Terminator 2: Judgment Day (1991) | Theatrical | The theatrical cut is nearly perfect in pacing and storytelling. The extended cut's additional scenes (like T-1000 glitching after freezing, John reprogramming the T-800) are interesting but unnecessary. The theatrical version maintains better tension and momentum. 
Most notably, the \"happy ending\" playground scene in the theatrical cut is preferred to the extended cut's darker alternate ending. |\n| Alien (1979) | Theatrical | The theatrical version is considered a masterpiece of pacing. The Director's Cut adds scenes that, while interesting (like Ripley finding Dallas in the cocoon), actually harm the rapid-fire tension of the final act. Scott himself has stated he prefers the theatrical cut. |\n| Star Wars (1977) | Theatrical | The original theatrical cut is considered more pure and less cluttered than later \"Special Editions\". Fans particularly dislike added CGI elements and the infamous \"Han shot first\" change. The pacing of the theatrical cut is also tighter. |\n| The Empire Strikes Back (1980) | Theatrical | Like A New Hope, fans strongly prefer the unaltered theatrical version. The Special Edition's added CGI and altered effects (like the Emperor hologram replacement, added windows in Cloud City) are considered unnecessary changes to a perfect film. The original practical effects and cinematography are considered superior. |\n| Return of the Jedi (1983) | Theatrical | The theatrical version is preferred over the Special Edition's controversial additions, particularly the changed ending music and added CGI celebration scenes. The \"Jedi Rocks\" musical number in Jabba's Palace is one of the most criticized Special Edition changes. The original Ewok celebration song \"Yub Nub\" is often preferred to the new ending. |\n| Apocalypse Now (1979) | Theatrical | While Redux (2001) and the Final Cut add interesting material, many feel the additions (especially the French plantation sequence) harm the pacing and dilute the core narrative. The theatrical cut maintains better tension and forward momentum. |\n| The Exorcist (1973) | Theatrical | \"The Version You've Never Seen\" adds the famous \"spider walk\" scene and several other moments, but the theatrical cut's pacing is superior. 
The original version better maintains its sense of building dread. |\n| Donnie Darko (2001) | Theatrical | The Director's Cut over-explains the film's mythology through added scenes and graphics, removing much of the mystery that made the original so compelling. The theatrical cut's ambiguity encourages viewer interpretation. |\n| Amadeus (1984) | Theatrical | The theatrical cut maintains better pacing and tighter focus on the central Salieri-Mozart conflict. The Director's Cut adds 20 minutes of historical context and servant relationships that, while interesting, don't enhance the core psychological drama. The theatrical version better preserves the opera-like structure of the narrative. |\n| Payback (1999) | Theatrical | The theatrical version's blue-tinted color scheme better fits the neo-noir tone. The original ending with Kris Kristofferson provides a more satisfying conclusion than the Director's Cut (the \"Straight Up\" version). Mel Gibson's voice-over is more engaging, and the slightly lighter tone makes Porter more sympathetic while maintaining the film's edge. Despite extensive studio interference, the theatrical cut became more commercially and critically successful. |\n| Almost Famous (2000) | Theatrical | While the \"Untitled: The Bootleg Cut\" adds interesting character moments and music scenes, the theatrical cut's tighter 122-minute runtime provides better pacing and more focused storytelling. Cameron Crowe's theatrical version better captures the whirlwind feeling of being on tour, while the 40 extra minutes in the extended cut, though enjoyable for fans, can make the journey feel too leisurely. |\n\nA [Custom Format: Special Edition (Unwanted)]() has been created to negate special editions for these specific movies, but does not yet work due to Radarr/Sonarr's parsing of release titles. The parsed 'Title' is removed from the release title, so you can't actually identify movies from custom formats (yet). 
Once this becomes possible, a single profile will be able to selectively prefer theatrical releases over special ones.\n\nTo mimic this behaviour in the current system, copy the profile you want to use and set its `Special Edition` score to the negative of whatever it was. Then apply the profile to whichever movies you want in their theatrical versions.", - "last_modified": "2025-03-26T11:42:47.250284+00:00", + "last_modified": "2025-04-01T13:14:44.335274+00:00", "title": "Edition Philosophy", "slug": "edtion-philosophy", "author": "santiagosayshey", @@ -94,7 +94,7 @@ { "_id": "home", "content": "# \ud83d\udc4b Hey!\n\nWelcome to Dictionarry! This project aims to wiki-fy and **simplify media automation** in Radarr / Sonarr through extensive, data-driven documentation, custom formats and quality profiles.\n\n## \ud83d\udca1 Motivation\n\nNavigating the world of media automation and coming across quality terms like \"Remux\", \"HEVC\" or \"Dolby Vision\" can be quite daunting when all you want to do is set up a media server to watch some content. It often **feels like you need a master's in audio / video just to grab the latest blockbuster.** Dictionarry aims not to explain these concepts in detail, but to **abstract them into more approachable ideas** that don't require extensive knowledge or experience.\n\nDictionarry leverages two key features of Radarr and Sonarr to simplify media automation:\n\n1. Custom Formats - Think of these as smart filters that scan release titles for specific patterns. They help **identify important characteristics** of your media, such as:\n\n - Video quality (4K, HDR, Dolby Vision)\n - Audio formats (Atmos, DTS, TrueHD)\n - Source types (Remux, Web-DL, Blu-ray)\n - Potential issues (upscaled content, poor encodes)\n\n2. Quality Profiles - These act like a scoring system that **ranks releases** based on their Custom Format matches. 
You can:\n - Prioritize what matters most to you\n - Automatically upgrade to better versions\n - Avoid problematic releases\n\nThink of Dictionarry as your personal car-buying expert: Instead of researching every technical specification and test-driving dozens of vehicles, you get access to a curated showroom of pre-vetted options that match what you're looking for. Whether you want:\n\n- 2160p Remux - **Maximum Quality** 4K HDR remuxes with lossless audio and Dolby Vision\n- 2160p Quality - **Transparent 4K** HDR encodes selected using the Encode Efficiency Index\n- 1080p Quality - **Transparent 1080p** encodes optimized using the Golden Popcorn Performance Index\n- 1080p Efficient - **Efficient x265 1080p** Encodes optimized to save space using the Encode Efficiency Index\n\n![Profile Preview](https://i.imgur.com/nZQzN9I.png)\n\nDictionarry's database of tested profiles and formats handles the technical decisions for you.\n\n## \u2699\ufe0f Profilarr\n\nThe database by itself does nothing. Custom Formats and Quality Profiles **need to be imported** and configured in your individual arr installations. Rather than leaving you to manually create everything yourself based on our guides, we've created **Profilarr** to automate this process.\n\nProfilarr is a **configuration management tool** for Radarr and Sonarr that can interface with **ANY remote configuration database** (not just Dictionarry's!). It automatically:\n\n- **Pulls** new updates from your chosen database\n- **Compiles** the database format into specific arr formats\n- **Imports** them to your arr installations\n- Manages version control of your configurations\n\nBuilt on top of git, Profilarr treats your configurations like code, allowing you to:\n\n- Track changes over time\n- Maintain your own customizations while still receiving database updates\n- Resolve conflicts between local / remote changes when they arise\n\nThe architecture was specifically built like this to **put user choice first**. 
We believe that:\n\n- **Your media setup should reflect your needs, not our opinions**\n- Updates should enhance your configuration, not override it\n- Different users have different requirements (storage constraints, hardware capabilities, quality preferences)\n- The ability to customize should never be sacrificed for convenience\n\nProfilarr empowers you to use Dictionarry's database (or anyone else's!) as a foundation while maintaining the freedom to adapt it to your specific needs.\n\n## \ud83d\udd28 Development Notice\n\nProfilarr 1.0.0 is out now in open beta! https://dictionarry.dev/wiki/profilarr-setup", - "last_modified": "2025-03-26T11:42:47.250284+00:00", + "last_modified": "2025-04-01T13:14:44.335274+00:00", "title": "home", "slug": "home", "author": "santiagosayshey", @@ -107,7 +107,7 @@ { "_id": "profilarr-casaos", "content": "This guide will walk you through the process of installing Profilarr as a custom app in Casa OS.\n\n## Prerequisites\n\n- A working Casa OS installation (this guide uses v0.4.15).\n- Basic knowledge of using the Casa OS interface.\n- Access to [https://github.com/Dictionarry-Hub/Profilarr](https://github.com/Dictionarry-Hub/Profilarr) for the install file.\n\n## Step-by-Step Installation\n\n1. **Add a Custom App to Casa OS:**\n - Open your web browser and navigate to your Casa OS dashboard.\n - Find and click on the \"+\" icon in the top right corner of the App section.\n - Select \u201cInstall a customized app\u201d\n - Select \u201cImport\u201d in the top right corner of the Settings page\n2. 
**Import Docker Compose File:**\n - Navigate to [https://github.com/Dictionarry-Hub/Profilarr](https://github.com/Dictionarry-Hub/Profilarr)\n - Scroll down to the \u201cInstallation\u201d section\n - You will see a **Docker Compose (recommended)** code block\n - Copy the Docker Compose file code\n - Navigate back to the Casa OS Import Docker Compose page and paste the code into the empty text box\n - Note: if you are not contributing to a database, delete the following section or Casa OS will throw an error that the file is missing:\n - `env_file:`\n - `- .env # Optional: Only needed if contributing to a database`\n - Click on \u201cSubmit\u201d and click \u201cOK\u201d to the warning\n3. **Profilarr App Details:**\n - You can leave most settings as default unless you have a specific reason to change them, like customizing to your network/system (Network, Port, Volumes, etc.); otherwise just change your Time Zone in Environment Variables\n - **Name:** \u201cProfilarr\u201d - but you can change it if you want\n - **Icon:** (Optional) You can upload an icon for the app.\n - **Web UI:** Should be your host device IP address\n - **Network:** Should be bridge\n - **Port:** Should be 6868 TCP\n - **Volumes:** Leave this as default unless you want to change the host path to a specific location\n - **Environment Variables:** (Only TZ is required, the others are optional)\n - TZ = Your Timezone (e.g., America/New_York)\n - GIT_USER_NAME = GitHub username for contributing\n - GIT_USER_EMAIL = GitHub email for contributing\n - PROFILARR_PAT = GitHub Personal Access Token for contributing\n4. **Install the App:**\n - Once you've filled in all the necessary details, click on the \"Install\" button.\n5. **Wait for Installation:**\n - Casa OS will now download and install the app. This might take a few minutes.\n6. **Access Profilarr:**\n - After installation is complete, you should be able to find Profilarr on your Casa OS dashboard. 
Click on it to launch the app.", - "last_modified": "2025-03-26T11:42:47.250284+00:00", + "last_modified": "2025-04-01T13:14:44.335274+00:00", "title": "Casa OS - Profilarr Installation Guide", "slug": "profilarr-casaos", "author": "lawgics", @@ -125,7 +125,7 @@ { "_id": "profilarr-setup", "content": "Profilarr is a **custom format / quality profile management tool** that acts as a middleman between a configuration database and your radarr/sonarr installations. It automatically:\n\n- **Pulls** new updates from your chosen database\n- **Compiles** the database format into specific arr formats\n- **Imports** them to your arr installations\n- Manages **version control** of your configurations\n\n## Installation\n\nProfilarr follows the GitFlow workflow for development:\n\n- New features are first merged into the `develop` branch for testing\n- Once stable, these features move to the `main` branch\n- For early access to new features, use `santiagosayshey/profilarr:beta`\n- For stable use, use `santiagosayshey/profilarr:latest`\n\nOnce installed, you can visit the web UI at `http://[address]:6868` and begin the setup process.\n\n### Docker\n\n#### Docker Compose (recommended)\n\n```yaml\nservices:\n profilarr:\n image: santiagosayshey/profilarr:latest # or :beta\n container_name: profilarr\n ports:\n - 6868:6868\n volumes:\n - /path/to/your/data:/config\n environment:\n - TZ=UTC # Set your timezone\n env_file:\n - .env # Optional: Only needed if contributing to a database\n restart: unless-stopped\n```\n\n#### Docker CLI\n\n```bash\n# --env-file is optional: only needed if contributing to a database\ndocker run -d \\\n --name=profilarr \\\n -p 6868:6868 \\\n -v /path/to/your/data:/config \\\n -e TZ=UTC \\\n --env-file .env \\\n --restart unless-stopped \\\n santiagosayshey/profilarr:latest # or :beta\n```\n\n#### Volumes\n\nWhen configuring the volume mount (`/path/to/your/data:/config`):\n\n- Replace `/path/to/your/data` with the actual path on your host system\n- **Windows users:** 
The database is case-sensitive. Use a Docker volume or the WSL file system directly to avoid issues\n - Docker volume example: `profilarr_data:/config`\n - WSL filesystem example: `/home/username/docker/profilarr:/config`\n\n### CasaOS\n\nView lawgics' CasaOS setup guide [here](https://dictionarry.dev/wiki/profilarr-casaos).\n\n### Development\n\nIn addition to being a 'sync' tool for end users, Profilarr also acts as a development platform for people to work on, and contribute to, a remote database. Read [here](https://dictionarry.dev/wiki/development-setup) to learn how to set up Profilarr for development.\n\n## Usage\n\n### Credentials Setup\n\nThe first time you visit the web UI at `http://[address]:6868`, you'll be prompted to set up login credentials.\n\n- Make sure you keep note of these credentials, as you won't be able to reset the password if you forget it later on (unless you have access to the filesystem and can interact with the Docker container).\n\n![](https://i.imgur.com/uhZWeHe.png)\n\n### Configuration Workflows\n\nOnce you've set up your user credentials, you can start working on your media configurations. You have the choice to either:\n\n1. Connect to an external database, make changes, receive updates and handle change conflicts.\n - This is what most people will use if they don't want to build configurations from scratch.\n2. Use Profilarr completely locally, without a database.\n - This option is for people who want the advantages of Profilarr's compilation system (single-definition profiles, tweaks, better management, etc.), but don't want to be tied to any one database. Skip ahead to [Making Changes](#making-changes).\n\n#### Connecting to a Database\n\nProfilarr leverages Git to create an open-source configuration sharing system. 
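Under the hood, linking a database is just tracking a Git repository, and your customizations become commits that later pulls can merge around. The following is a rough, illustrative sketch of the equivalent raw git operations (the path, file name, and scores are made up for demonstration; Profilarr drives git for you, so you never run these commands yourself):

```shell
# Illustrative only: a throwaway local repo standing in for a configuration database
rm -rf /tmp/profilarr-demo && mkdir -p /tmp/profilarr-demo && cd /tmp/profilarr-demo
git init -q
git config user.email "demo@example.com" && git config user.name "demo"

# The "database" ships a profile definition...
echo "score: 100" > profile.yml
git add profile.yml && git commit -qm "database: initial profile"

# ...and your customization is recorded as a commit of its own,
# which is what lets later pulls merge around it instead of overwriting it
echo "score: 90" > profile.yml
git add profile.yml && git commit -qm "local: my tweak"
```

Because the tweak is its own commit, git can later merge remote updates to other files (or other fields) on top of it; only edits to the same fields produce the merge conflicts described further down this page.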
To get started, navigate to `Settings -> Database`, and link a repository.\n\n![](https://i.imgur.com/OpArP4z.png)\n\n| # | Feature | Description |\n| --- | -------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| 1 | Database information | Contains basic information about the database - Name, Owner, Stars/Issues/PRs |\n| 2 | Status Container | - View outgoing changes (any local changes you've made to the database)
- View incoming changes (any changes pushed to a remote database that haven't been applied to your local one)
- View merge conflicts (when you've made changes to a file that also has incoming changes) |\n| 3 | Commit / Change Log | - View logs of all prior changes applied to your database
- If your HEAD is out of date with the remote, it will only show commits made after the point of divergence |\n| 4 | Unlink Repo | - Remove the currently linked repo
- Choose to either keep the current files and stop receiving updates
- Or remove all files and sync to a completely different database instead |\n| 5 | Current Branch | - Databases may choose to maintain stable / beta versions of their configurations via branches
- You would choose your preferred configuration path here (most will just use stable) |\n| 6 | Auto Sync | - Option to let Profilarr automatically pull in new updates without consulting you first.
- Useful if you want to connect to a database, receive updates and forget about it after
- If a pull causes a merge conflict, Profilarr will pause mid-merge and let you resolve the conflicts manually before continuing |\n\n**NOTE**: The database must adhere to the Profilarr standard format to work correctly with Profilarr (i.e. configurations must be made / edited inside Profilarr and not externally).\n\n- Profilarr does not ensure that every public database will adhere to this format, nor work properly with them (only our own - the Dictionarry database).\n\nThe following sections will use the [Dictionarry Database](https://github.com/Dictionarry-Hub/database) for demonstration purposes.\n\n#### Getting Updates\n\nDatabases are likely to change over time; they might receive new features such as edition formats, or new quality profiles targeting anime releases. They might fix bugs with regex patterns, or improve descriptions and tags. Since Profilarr connects to a Git repository, it can take advantage of Git's version control capabilities to show when your local database is out of sync with the remote database.\n\nWhen updates are available, Profilarr will display them in the Status Container section of the Database page (provided you don't have auto pull enabled):\n\n![](https://imgur.com/gimLQU7.png)\n\n1. **Incoming Changes**: Shows all changes that have been pushed to the remote database but haven't yet been applied to your local installation\n - Each change shows a single file\n - Changes will usually be marked as tweaks, additions, removals, renames, etc.\n - You can click the 'View Changes' button, which opens a modal showing the associated commit + message, and the exact fields that have changed\n\n![](https://i.imgur.com/qjfqMfQ.png)\n\n2. 
**Update Process**:\n\n - Click the \"Pull Changes\" button to apply all incoming changes to your local database\n - Profilarr will automatically merge these changes with your local setup\n - If you've enabled Auto Sync in settings, these updates will be applied automatically\n - Once pulled, your database will go back to being in sync\n - It is not yet possible to pick and choose updates, but this feature will be looked at in future\n\n3. **Update History**:\n - All successfully applied updates are logged in the Commit/Change Log section\n - This provides a complete history of changes applied to your database\n - You can use this log to track when specific features were added or modified\n - While technically feasible, Profilarr does NOT allow you to go back to a certain commit, for interoperability reasons.\n\n#### Making Changes\n\nDatabases are meant to act as 'starting points' for your setup:\n\n- Some may be broad and have a variety of profiles to use\n- Others might be incredibly niche and focus on small but important philosophies\n- Even Dictionarry's database, which aims to be both broad and niche at the same time, is just a starting point\n\nYou have the power to make changes to _whatever_ you want, and still receive updates from a database. To make changes, you simply interact with the configs you want to change and save them - just as you would in Radarr / Sonarr.\n\n- You can change file names, regex patterns, descriptions, format scores, quality groups - whatever you want.\n- You can view these changes in the database tab just as you would see incoming changes.\n\n![](https://i.imgur.com/m0t5u3C.png)\n\nFrom this point, you have a few choices. You can either:\n\n- **Revert changes.** Have you ever made changes to your quality profiles and wanted to change them back, but couldn't remember what they used to be? Since we operate within Git, you can revert a file back to its previous 'stable' state using `git revert`. 
It's as simple as pressing a button now.\n- **Commit Changes**. When you're satisfied with your modifications and want to preserve them, you need to stage and commit them to your local Git repository. This creates a permanent record of your customizations that Profilarr can reference when pulling updates from the remote database.\n\n![](https://i.imgur.com/RTvo2Ud.png)\n\n| # | Action | Description |\n| --- | ------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| 1 | Stage | - Marks modified files to be included in your next commit
- This is the preparation step before saving changes permanently
- You can select which specific files to stage, allowing you to group related changes together
- Staged files appear in a separate section in the interface
- Files must be staged before they can be committed (Git's two-step stage-then-commit process ensures you review changes before finalizing them) |\n| 2 | Unstage | - Removes files from the staging area that you previously staged
- Useful when you accidentally stage files or decide not to include certain changes in your commit
- The file remains modified in your working directory, but won't be included in the next commit
- You can only select and unstage files that are currently in the staging area |\n| 3 | Commit | - Permanently saves all staged changes to your local Git repository
- Requires a commit message that describes what changes were made and why
- Creates a checkpoint you can revert to later if needed
- **Important**: All staged files will be committed, not just selected ones
- After committing, these changes become part of your local configuration history
- This is the crucial step that allows Profilarr to track your customizations separately from the original database |\n| 4 | Revert | - Returns a file to its previous state before your modifications
- Especially useful when you've made changes you no longer want to keep
- You can only revert uncommitted changes
- This preserves the history of changes while effectively canceling out unwanted modifications |\n| 5 | Push | - Sends your local commits to the remote database
- **Only relevant for database contributors and developers**
- Requires appropriate permissions to the remote repository
- Regular users don't need to worry about this action |\n\n##### Why Commits?\n\nYou might wonder: \"Why do I need to manually stage and commit changes? Why doesn't Profilarr just save them automatically?\" The answer lies in Profilarr's core philosophy of balancing customization with ongoing updates:\n\n**Breaking the \"All or Nothing\" Model**: Traditional tools force you to choose - either use their configurations exactly as provided, or be cut off from future updates once you make changes. When you commit in Profilarr, you're creating clear markers that tell the system \"these parts are my customizations.\" This allows Profilarr to know exactly which parts to preserve when new updates arrive and which parts can be safely updated.\n\nTechnically, Git is creating snapshots of your configurations at specific points in time. When you commit changes, Git records the exact differences between the original file and your modified version. Later, when pulling updates, Git analyzes these differences alongside the incoming changes and intelligently determines how to combine both sets of modifications without losing either. Without these explicit commit markers, there would be no reliable way to perform this merge operation.\n\nWhile Profilarr could theoretically automate the staging and committing process, we've deliberately kept it manual. This is because Profilarr also serves as a development platform, and developers need precise control over when and how their changes are saved. Automatic commits would be frustrating for database contributors who are testing various configurations and don't want every experimental change permanently recorded. This manual approach gives both end users and developers the flexibility they need without compromising functionality.\n\nWhile the extra step might seem clunky at first, it's the mechanism that enables Profilarr's unique ability to let you personalize configurations while still receiving ongoing improvements. 
The alternative would be returning to the \"use our configs exactly as provided or you're on your own\" approach of other tools.\n\n#### Handling Merge Conflicts\n\nEven with Git's intelligent merging, sometimes you'll encounter situations where both you and the remote database have modified the same parts of the same files. When this happens, Profilarr needs your help to determine which changes to keep.\n\n##### When Conflicts Occur\n\nMerge conflicts can arise in a scenario like this:\n\n- You've customized a quality profile to allow AV1 encodes\n- Meanwhile, the remote database has updated the same profile to allow AV1 encodes, but at a reduced score that is pushed up by other formats\n- Both changes affect the same file\n\nWhen incoming changes affect files you've modified, Profilarr will mark them with a \"Potential Conflict\" label in the Status Container's incoming changes.\n\n![](https://i.imgur.com/JS8gfn4.png)\n\nWhen you attempt to pull these changes, the database will enter a \"Merge Conflict\" state.\n\n- At any point, you can choose to abort the merge and go back to your previous database state.\n- You will not, however, be able to pull in any new updates until the merge conflict has been resolved.\n\n![](https://i.imgur.com/miuLkzw.png)\n\n##### Resolving Conflicts\n\nIn the Merge Conflict state:\n\n1. Profilarr prevents you from making changes to other files until all conflicts are resolved\n2. The interface displays each conflicting field side-by-side, showing \"Yours\" (your version) and \"Theirs\" (remote version)\n3. You must resolve conflicts field-by-field, file-by-file\n4. For each field, you choose whether to keep your version or adopt the remote changes\n5. 
After resolving a conflict (but before completing the merge), you can edit your choices in case you change your mind\n\n![](https://i.imgur.com/bJH7dJr.png)\n\nHere, the user has chosen to:\n\n- Accept the incoming changes for two custom formats (360p and 2160p Quality Tier 5)\n- Keep their local score change for AV1\n\n##### After Resolution\n\nOnce you've resolved all conflicts for all files, you can commit the merge changes:\n\n![](https://i.imgur.com/bd5hjBr.png)\n\n1. Non-conflicting files that were part of the pull are automatically merged\n2. Your resolved files maintain the exact choices you made during conflict resolution\n3. Your local database returns to an \"in sync\" state with the remote\n4. Normal operations can resume until the next update or change\n\nThis process ensures you get the best of both worlds - keeping your important customizations while still benefiting from improvements in the remote database. While it may seem complex at first, this approach gives you complete control over how updates are integrated with your personalized setup.\n\n#### Profilarr Quirks\n\nProfilarr has made some changes to the way custom formats and quality profiles are built. Here's a basic overview of the biggest differences compared to standard Radarr/Sonarr configurations:\n\n| Feature | Description |\n| --- | --- |\n| Reusable Regex Patterns | - Regex patterns are now separate from custom formats and referenced by name
- This allows reusing the same pattern in multiple places
- Changes to a pattern automatically apply everywhere it's used
- At compile time, pattern names are resolved to their actual regex expressions for the \\*arr apps |\n| Conditional Format Import | - Custom formats with a score of 0 are not included in profiles (unless specifically added in selective mode)
- This helps keep your profiles cleaner by excluding unused formats |\n| Enhanced Sorting | - Additional methods for sorting, scoring, and searching files |\n| Language Handling | - Complete overhaul of language management
- All profiles set language to \"Any\" and use language custom formats based on preferences
- Options include:
\u2022 \"Any\" - No language filtering
\u2022 \"Must Include\" - Ensures releases contain at least your preferred language
\u2022 \"Must Only Be\" - Ensures releases contain ONLY your preferred language |\n| Documentation-Focused | - Tags and descriptions are stored in Profilarr but removed during compilation
- These elements are purely for documentation and organization |\n| Integrated Testing | - Regex patterns and custom formats include testing functionality
- Used in continuous integration to ensure changes don't break existing functionality
- Helps maintain compatibility as configurations evolve |\n| Single Definition | - Profiles and custom formats are defined once in Profilarr
- Automatically converted to appropriate Radarr/Sonarr syntax at compile time
- Eliminates need to maintain separate definitions unless different logic is required |\n\n#### Git Gud\n\nProfilarr attempts to make Git accessible to all users. However, there are some aspects of it that can't be completely simplified or safeguarded against. Understanding these key concepts will help you avoid common pitfalls and get the most out of the system, even if you've never used Git before.\n\n| Topic | Guidance |\n| ---------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| Commit Messages | - Write clear, descriptive commit messages that explain what you changed and why
- Good messages help you track your history and understand changes months later
- Examples: \"Adjusted AV1 score to prioritize quality over filesize\", \"Added support for anime dual-audio formats\" |\n| Avoiding File Deletion | - Deleting files should be a last resort, not a go-to solution
- When you delete a file that exists in the remote database, it will cause merge conflicts when that file receives updates
- Instead of deleting, consider:
\u2022 Disabling formats you don't want to import
\u2022 Renaming files to indicate they're not in use
\u2022 Using comments to note why you're not using certain configurations |\n| Commit Size | - Smaller commits that focus on specific changes are easier to manage
- They make conflict resolution simpler when conflicts occur
- Example: Commit changes to anime profiles separately from changes to movie profiles |\n| Reviewing Changes | - Always review what you're about to stage using the \"View Changes\" feature
- Make sure each change is intentional and correct
- This helps prevent accidental modifications from being committed |\n| Backups | - Before making significant changes, consider exporting your configurations
- This provides a fallback if something goes wrong
- Most issues can be resolved, but having a backup gives peace of mind |\n| Abandoned Changes | - If you have unstaged changes you no longer want, use the \"Revert\" option
- Don't leave unwanted changes hanging around - they'll complicate future operations |\n\n### Importing\n\nOnce you've set up your media configuration workflow, you can set up the external apps that Profilarr will attempt to sync with. You need to set up:\n\n![](https://i.imgur.com/2ZqjGKg.png)\n\n#### Type / Server\n\nAPI changes can sometimes break Profilarr's import functionality, so version limits are enforced on the apps it can import to - such breaks are rare and are usually fixed quickly.\n\n#### Sync Settings\n\n| Sync Method | Description |\n| --- | --- |\n| Manual | - Go to the format/profile page and enter select mode (button in top right toolbar or Ctrl+A)
- Select specific files you want to import and where you want to import them
- Gives you full control over what configurations are synced to which applications
- Best for users who want to carefully manage what gets imported |\n| On Pull | - Automatically syncs selected files whenever the database receives an update
- When combined with Auto Pull, allows Profilarr to work completely autonomously |\n| On Schedule | - Similar to On Pull, but runs on a schedule of your choosing
- Set specific times/intervals for Profilarr to check for changes and import them
- Useful for controlling when system resources are used for synchronization
- Good compromise between automation and control
- Creates a scheduled task that you can also trigger manually anytime you want |\n| Import as Unique | - Works with any of the sync choices above
- Appends a unique identifier to imported files
- Allows you to use your Profilarr database alongside different tools/configs
- Example: Run TRaSH guides + Notifiarr configurations simultaneously with your Profilarr configs
- Prevents name conflicts when using multiple configuration sources |\n\n#### External App Setup\n\nIn future updates (hopefully soon), Profilarr will handle a quick setup sync (changing media management, quality slider settings, etc.), but for now you need to change these settings manually.\n\n| Setting | Recommendation | Explanation |\n| --- | --- | --- |\n| Propers and Repacks | Set to \"Do Not Prefer\" | Other options will override custom formats and make Radarr/Sonarr grab things we don't want |\n| Quality Sliders | Set min/max for everything | Custom formats will do 99% of the ranking; any other settings usually just get in the way |\n\n![](https://i.imgur.com/IyJLvfR.png) ![](https://i.imgur.com/zws00bj.png)",