[
{
"_id": "Architecture Overhaul",
"content": "Hey @everyone, here's a small update on what I've been working on lately:\n\nAs the project has grown bigger, it's gotten quite difficult to keep track of and manage a billion different custom formats, quality profiles, etc. To help improve development productivity, I've planned a complete overhaul of Dictionarry's architecture. This starts with separating things into modules - namely a separate database which powers the website and the profilarr tool.\n\nNext up is standardizing the actual entries inside the database. The biggest issue in development right now is making / editing / updating the same thing multiple times. If you have the same regex pattern for multiple CFs, it needs to be updated for each one of them. Quality profiles across different apps have miniscule differences in syntax (eg. web-dl in radarr vs web in sonarr), which means we need multiple files with tiny differences.\n\nWorking in this system is extremely error prone and time consuming. To fix this, I'm creating a standard unique to dictionarry based on a **single definition format**, i.e. Regex patterns, Custom Formats and Quality Profiles are defined once, and repeated in other places using foreign keys. I don't know exactly _how_ this will look, but the plan is simplicity above all. Outside of improving productivity, I hope this standard helps encourage people who feel less confident with custom formats / quality profiles make more intuitive changes to their own setups.\n\nNow, the problem with this new and improved standard is - the arrs won't be able to read the files anymore. Solution: A compiler! This is where the fun begins; we take our simple, easy-to-develop-for files and push them through the compiler. Out pops the required syntax, with those weird naming rules (web-dl for radarr, web for sonarr), without the developer needing to ever worry about it!\n\nHere's a canvas page I made in Obsidian which visualizes this architecture:\n\n![Archiecture Diagram](https://i.imgur.com/HcXFNHU.png)\n\n# Profile Selector\n\nHere's an updated look at the new profile selector (WIP) in action. I'll leave explaining the selection algorithm for another day (because I'm still not quite happy with it), but I think it's still pretty cool to look at as is.\n\n![Selection Algorithm v1](https://streamable.com/bhi7h6)",
"last_modified": "2025-08-20T00:28:41.180792+00:00",
"title": "Architecture Overhaul",
"slug": "architecture_overhaul",
"author": "santiagosayshey",
"created": "2024-8-13",
"tags": [
"devlog",
"architecture"
]
},
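The "define once, compile per app" idea in the Architecture Overhaul entry can be illustrated with a minimal sketch. Everything below is an assumption for illustration - the unified schema, the mapping table, and `compile_quality` are hypothetical and not Profilarr's actual compiler; only the web-dl (Radarr) vs web (Sonarr) naming quirk comes from the post.

```python
# Minimal sketch of the "single definition format + compiler" idea. The unified
# field names below are invented; only the web-dl (Radarr) vs web (Sonarr)
# naming difference is taken from the dev log.

SOURCE_NAMES = {
    "web": {"radarr": "web-dl", "sonarr": "web"},
}

def compile_quality(unified: dict, app: str) -> dict:
    """Translate one unified quality definition into app-specific naming."""
    return {
        "name": unified["name"],
        "source": SOURCE_NAMES[unified["source"]][app],
        "resolution": unified["resolution"],
    }

item = {"name": "WEB 1080p", "source": "web", "resolution": 1080}
print(compile_quality(item, "radarr"))  # source becomes 'web-dl'
print(compile_quality(item, "sonarr"))  # source stays 'web'
```

The point of the design is that the developer edits only the unified entry; the per-app spelling is generated at compile time.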
{
"_id": "Modular Choices",
"content": "Hey @everyone, here's a small (but very important) post on the new update system!\n\n## Current Profilarr\n\nCurrently, there is 0 support for updates in Profilarr. This is obviously not ideal; it's a nightmare to keep up to date with changes and almost certainly breaks any custom changes you make.\n\n## Profilarr v1\n\nUsers will be able to view incoming and outgoing changes, as well as resolve any conflicts between the two. To achieve this, a user friendly GUI has been built on top of Git's merge functionality and allows fine control over what should be merged / ignored. More specifically, this functionality allows us to make custom changes and choose to retain them once a new update comes around.\n\n- As an example, let's say you've made the Dolby Vision custom formats negative because your TV doesn't support it. A new update has come out which shuffles around HDR scores, and this leads to a merge conflict between the two custom format scores.\n- In the settings page, you can choose to accept the incoming change or retain your local changes. Profilarr will 'remember' your choice and stop prompting you to update this custom format until a new update comes out, in which case, the situation repeats. Keep local or accept incoming.\n\n### Settings Page\n\nProfilarr now includes a dedicated page for 'Sync Settings'. It allows you to link / unlink a database repository, view and change branches as well as deal with incoming / outgoing changes and their conflicts. This page has been planned for developers too; you can add an authenticated github dev token to your environment and you have the ability to make changes directly to Profilarr's database (not to stable, obviously).\n\n# Beta Release\n\n- Still not quite ready yet, but I'm working hard to get it out! Stay tuned :hearts:\n\nHere's a screenshot of this new Conflict Resolver in action (Ignore the date modified row, it will be removed for actual use)\n\n![Conflict Resolver](https://i.imgur.com/0EZrumU.png)",
"last_modified": "2025-08-20T00:28:41.180792+00:00",
"title": "Modular Choices",
"slug": "modular_choices",
"author": "santiagosayshey",
"created": "2024-12-3",
"tags": [
"devlog",
"architecture",
"user_choice"
]
},
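A rough sketch of the keep-local / accept-incoming behaviour described in the Modular Choices entry, expressed with plain git commands. This is not Profilarr's code: the `remembered_choices` dict and the file path are made up, and the only grounded detail is that the resolver sits on top of git's merge machinery.

```python
import subprocess

# Sketch only: during a merge conflict, `git checkout --ours` keeps your local
# version of a file and `--theirs` takes the incoming update. A real tool would
# persist the user's choice; here it's just a dict keyed by file path.
remembered_choices = {"custom_formats/dolby_vision.yml": "local"}  # or "incoming"

def resolve_conflict(repo_path: str, file_path: str) -> None:
    choice = remembered_choices.get(file_path, "incoming")
    side = "--ours" if choice == "local" else "--theirs"
    subprocess.run(["git", "checkout", side, "--", file_path], cwd=repo_path, check=True)
    subprocess.run(["git", "add", file_path], cwd=repo_path, check=True)

# resolve_conflict("/config/db", "custom_formats/dolby_vision.yml")
```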
{
"_id": "Profilarr is in Beta \ud83d\ude80",
"content": "hey @everyone, long awaited dev log :)\n\n## What's New? \ud83d\udc48\n\nMany people are already aware, but I thought I should formally announce here on discord that **Profilarr is out in beta!** I've been working on it since around July last year and put in a massive effort over the Christmas break to get it working. Even though it's not nearly as stable as I would like it to be, it implements the core architecture I first talked about [here](https://dictionarry.dev/devlog/architecture_overhaul). There is still so (x10) much to be done in terms of bugs & polish & new features, but I'm happy sharing it as is. Hopefully you can all find some benefit in using it too :) \n\nYou can read our setup guide [here](https://dictionarry.dev/wiki/profilarr-setup). It's available as a community app on Unraid, and as a Docker image for both ARM (Apple Silicon, Raspberry Pi) and x86.\n### Database \ud83d\udcbe\n\nAlong with Profilarr, the Dictionarry database has also got an overhaul. We introduced the new encode efficiency index, 2160p Quality and Balanced profiles as well as other small improvements like editions, repacks and freeleech. Here are some scattered thoughts that you might also be interested in: \n- @Seraphys has been working on a scoring refactor that introduces 720p fallback, fixes streaming service names, and groups similar releases together better. It's a huge change that I haven't been able to fully test myself, but I've merged it into a separate branch because I know people are pretty antsy to start testing themselves. Anyone is free to give it a try, you just have to switch to the `scoring-refactor` branch in Profilarr. Please direct any issues / improvements to the database's [Issue Tracker](https://github.com/Dictionarry-Hub/database).\n- I'm personally not too happy with the state of the current database - poorly named files and renames/imports weren't taken into enough consideration and it's causing way too many download loops. I'm still trying to figure out exactly how I want to tackle these problems but I just want people to know that it is on my mind and it will be improved in future. \n\n### Tweaks \ud83d\udd27\n\nI talked about tweaks in detail [here](https://dictionarry.dev/devlog/profile_tweaks) and had actually implemented some of them into Profilarr, but decided to remove them at the last minute. On paper, it's an interesting system. In practice, it's confusing and really hard to program for. It's meant to be a database agnostic feature, but was hardcoded into Profilarr's profile system. I'm going to keep this feature on the roadmap as a maybe for now, but I'm going to have to completely rethink how to implement it from the ground up. \n\n## What's Next? \ud83d\udc49\n\nHere's a (non comprehensive) list of what you can expect me to work on now that Profilarr is in beta. \n\n### Profilarr\n\n- Media Management Sync - Databases will be able to implement their own media management settings (quality sliders, rename templates, delay profiles, etc) and use profilarr to sync them\n- Multi Database Support - Refactoring the database to use a dependency system that allows databases to act as layers and depend on layers above them. This lets profile databases exist independently of format databases and that independently of regex databases. This way, you'll be able to connect to multiple at once and build off them as you please (or just link a complete one). 
\n- Everything on the issue tracker: https://github.com/Dictionarry-Hub/profilarr/issues\n\n### Database\n\n- Efficiency Profiles - 1080p Efficient (10%), 1080p Efficient (22.5%) and 2160p Efficient will use the [Encode Efficiency Index](https://dictionarry.dev/wiki/EEi) to prioritise HEVC releases. \n- Anime Support - Likely just quality profiles, but I also want to explore alternative options that better support dynamic needs. We likely want to make release group tiers, but also figure out a way to prioritise releases from newer & better sources. I'm not personally into that much anime, so I'm going to need as much input as I can get from you guys ~ please start those conversations if you want something to be considered (some have already asked, I'll get back to you when I can!)\n- Better Streaming Service Grab Logic - This is already partially improved in Seraphys' refactor, but I would also like to add support for more streaming services and revise the interaction between release groups and sources. \n\n## Housekeeping \ud83e\uddf9\n\nWe've had an influx of new members over the past couple weeks, so I'd like to welcome you all to our discord \ud83d\udc4b Come say hey in #general if you haven't already. \n\n### Moderation, Wiki, Support \ud83e\udd1d\n\n- I'd like to introduce @Seraphys as our first moderator and designated detail devotee \ud83e\udd23 Big claps all around. \n- The rules, faq, links (among others) are very out of date and will be getting a refresh soon, stay tuned for those updates. \n- I will likely be closing the support post channels soon and replacing them with a single, simpler text channel and removing the bot integration. For any basic support, please message us over there, but for any major issues please redirect your queries to our issue trackers on GitHub from now on. [here](https://github.com/Dictionarry-Hub/profilarr/issues) and [here](https://github.com/Dictionarry-Hub/database)\n\n### Donations \ud83d\udcb8\n\nIf you've donated and would like a special 'Donor' role badge here on discord, please shoot me a PM. \n\n### Taking a Break \u23f8\ufe0f\n\nI want to let everyone know that I'll be taking a break for a little while ~ I spent the majority of the past 4-5 months working on Profilarr and I'm quite burnt out. I'm trying very hard to balance full time study with development, but they unfortunately just don't mesh the way I hoped they would. I can't not work at 100% for either, so something had to give and for the past month or so, that's been my sleep and sanity. I unfortunately can't delay my semester (as much as I want to), so I'm going to have to dial down the time I spend on Dictionarry/Profilarr. I think I'm going to do a proper break (no dev at all) for a couple weeks at least ~ until my easter break, then I'll slowly pick up speed again. Couple of specific points I want to mention here:\n- I'm going to stop giving ETAs for things. They always take longer than I expect them to, which puts pressure on me and probably disappoints you guys when something inevitably doesn't happen on time. The defacto answer to any ETA questions from now on will be \"when it's ready\". \n- I've been pretty scatterbrained lately, so if someone is waiting on a message from me just know that I haven't forgotten about you and will get back when I have the time. If it's been a while, shoot me a PM or something as a reminder ~ I'll still be active on discord during my break. 
\n\n### Thank You \ud83d\ude4f\n\nThis project has grown tremendously in scope in the last year and that's not possible without a community, so big thanks from me to all of you. I'm still figuring all of this out as I go along so it's kind of unbelievable how many people are using a tool that once only existed in my head. \n\nCheers, everyone.",
"last_modified": "2025-08-20T00:28:41.180792+00:00",
"title": "Profilarr is in Beta \ud83d\ude80",
"slug": "profilarr_is_in_beta",
"author": "santiagosayshey",
"created": "2024-1-4",
"tags": [
"devlog",
"profilarr",
"database",
"housekeeping"
]
},
{
"_id": "Profile Selector v3",
"content": "hey @everyone , thought I'd make a channel to share some development logs.\n\nI've been feeling pretty inspired code wise the past few days, so I've actually made some progress despite saying I would take a break...\n\nAnyways, after designing Profile Selector v3 in Figma for the past couple months, I started work on actually implementing it. Let me tell you that drawing shapes is much, much easier than coding them. After a couple days of regretting not paying attention in high school trigonometry, I have the basic functionality in place! We have three data points which represent each of the requirements - quality, efficiency, compatibility. The user can select points on each of the axes, and each combination is used to recommend a profile. It's not hooked up to the database yet, so random strings are being used as a placeholder.\n\nThe good thing about this design is that it's really modular. Once I finish the 'beginner' version of it, I'll be able to add an advanced mode which can be used to select any kind of requirement. Resolution, HDR, Audio, etc.\n\nHere's how it looks right now (obvious disclaimer that final version will look much much better):\n\n![Selector Proof of Concept](https://streamable.com/2uprnl)\n\nHere's a funny tidbit from development:\n\nI tried writing some animation styling to make the inner polygon look like its stretching (as opposed to instant, static movement). It didn't quite work..\n\nBehold: Frankenstein's Triangle.\n\n![Frankenstein's Triangle](https://streamable.com/z70sj8)",
"last_modified": "2025-08-20T00:28:41.180792+00:00",
"title": "Profile Selector v3",
"slug": "profile_selector_v3",
"author": "santiagosayshey",
"created": "2024-6-2",
"tags": [
"devlog",
"profile_selector",
"website"
]
},
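The Profile Selector v3 entry deliberately leaves its selection algorithm unexplained, so the following is only a generic illustration of the three-axis idea: give each profile a position on the quality / efficiency / compatibility axes and return the one closest to the user's selected point. The profile names and coordinates are invented.

```python
import math

# Illustrative only - not the algorithm from the dev log. Each profile gets a
# made-up position on the (quality, efficiency, compatibility) axes and the
# closest profile to the user's selected point wins.
profiles = {
    "1080p Transparent": (0.9, 0.3, 0.6),
    "1080p Balanced": (0.6, 0.6, 0.8),
    "1080p Efficient": (0.4, 0.9, 0.7),
}

def recommend(quality: float, efficiency: float, compatibility: float) -> str:
    user = (quality, efficiency, compatibility)
    return min(profiles, key=lambda name: math.dist(user, profiles[name]))

print(recommend(0.5, 0.8, 0.7))  # -> '1080p Efficient'
```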
{
"_id": "Profile Tweaks",
"content": "Hey @everyone, I've been hard at work on the next Profilarr version over the past few weeks and have new stuff to show off!\n\nThe profiles we make are meant to be (really good) starting points, not a strict standard on what you _should_ be grabbing. Up until now, profiles existed as singular entities that don't respect custom changes. Merge conflict resolution was a big step in the right direction for this (read more in the last dev log), but it's a bit more hands on, and not something I expect most people to engage with.\n\nEnter 'Profile Tweaks'. These are simple check boxes you can enable / disable and are unique to YOUR profiles. They will ALWAYS be respected, regardless of what updates we make to the base profile. For now, these tweaks include:\n\n- Prefer Freeleech\n- Allow Prereleases (CAMS, Screeners, etc)\n- Language Strictness\n- Allow Lossless audio\n- Allow Dolby Vision without Fallback\n- Allow bleeding edge codecs (AV-1, H266)\n\n(Some are only available for specific profiles, eg lossless audio for 1080p Encode profiles).\n\nIf anyone has any tweak ideas (even super specific ones), please let me know and I'll work on getting it integrated! Here's an image of the Tweaks Tab:\n\n## Profilarr Progress\n\n- Progress is steady, I've been working on it every day since my semester ended. It's taken way, way longer than I've expected (sorry!) but I'm happy with how it's starting to look.\n- Git integration is complete and working, but needs lots of testing.\n- Data modules (custom formats, regex patterns, quality profiles) are complete and fully implement the existing logic from Radarr / Sonarr.\n- I am currently in the progress of porting existing data to the new database (https://github.com/Dictionarry-Hub/database/tree/stable) in the new profilarr standard format. This is going to take a while, as I have to write descriptions, add tags, test cases, etc.\n- Finally, I am starting to work on the compilation engine (https://discord.com/channels/1202375791556431892/1246504849265266738/1272756617041154049) and the import module. Once these things are complete, and I'm confident we won't run into massive bugs, I'll release a beta docker image. ETA? I really don't know, but I'm working as hard as I can.\n\nIf anyone has any tweak ideas (even super specific ones), please let me know and I'll work on getting it integrated! Here's an image of the Tweaks Tab:\n\n![Profile Tweaks](https://i.imgur.com/fzbmJSn.png)",
"last_modified": "2025-08-20T00:28:41.180792+00:00",
"title": "Profile Tweaks",
"slug": "profile_tweaks",
"author": "santiagosayshey",
"created": "2024-12-3",
"tags": [
"devlog",
"architecture",
"user_choice"
]
},
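Why tweaks survive database updates can be sketched as an overlay: the base profile is free to change upstream, and the user's tweaks are re-applied on top every time it syncs. The tweak names come from the list in the Profile Tweaks entry; the profile fields and score values below are invented and not Profilarr's real schema.

```python
# Sketch of tweaks as a user-owned overlay on an upstream profile. The
# 'format_scores' field and the score values are assumptions for illustration.

def apply_tweaks(base_profile: dict, tweaks: dict) -> dict:
    profile = dict(base_profile)          # never mutate the upstream definition
    scores = dict(profile.get("format_scores", {}))
    if tweaks.get("prefer_freeleech"):
        scores["Freeleech"] = scores.get("Freeleech", 0) + 10
    if not tweaks.get("allow_dv_without_fallback", True):
        scores["DV (No Fallback)"] = -10000  # effectively reject these releases
    if not tweaks.get("allow_lossless_audio", True):
        scores["Lossless Audio"] = -10000
    profile["format_scores"] = scores
    return profile

base = {"name": "1080p Encode", "format_scores": {"Freeleech": 0}}
user_tweaks = {"prefer_freeleech": True, "allow_dv_without_fallback": False}
print(apply_tweaks(base, user_tweaks))  # base can change upstream; tweaks re-apply
```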
{
"_id": "Shiny New Stuff",
"content": "hey @everyone, hope you guys are well. Here's another update!\n\n# Motivation\n\nI've been really struggling to work on this project for a few months now - I'll finally get some time at the end of the week but feel completely unmotivated to work on it for more than an hour. Well... after cracking the architecture problem last week and seeing all the support from you guys, I've felt especially motivated to dive back in.\n\n# Profilarr v2 (not really v2 but it sounded cool)\n\nProfilarr is getting some really nice upgrades. Here's an outline of the most important ones:\n\n## It's now a full stack application.\n\nThis means we have a frontend: a site that users can visit to adjust, import, and export regexes, custom formats, and quality profiles. It's built in a way that aims to 'remaster' how it's implemented in Radarr/Sonarr. All the existing functionality is there, but with some really nice quality of life features:\n\n- **Single definition format**: As outlined in the previous dev log, Profilarr's version of this system will use a single definition format. Notably, this allows you to set regex patterns ONCE, then add that regex as a condition inside a custom format.\n- **Sorting and Filtering**: You can now sort and filter items by title, date modified, etc.\n- **Exporting/Importing**: The standard format now allows _everyone_ to import/export regexes, custom formats, and quality profiles freely - no need to query APIs to do this anymore.\n- **Syncing**: Instead of clogging up everyone's arrs with unused custom formats, the sync functionality now only imports _used_ items.\n- **Mass selection**: You can mass select items to import/export/sync/delete.\n- **Tags**: Instead of manual selection, you can set tags on specific custom formats/quality profiles that should be synced. This works similar to how Prowlarr uses tags to selectively sync indexers. Since we are also using the same database for the website, tags can also be used for little tidbits of information too. Like where a release group is an internal at!\n- **Testing**: Developers can now permalink regexes to regex101. This makes it really easy to develop and test simultaneously.\n- **Descriptions**: You can now explain what specific items are for. No need to look it up on the website to see what it does.\n\n## Backend Improvements\n\nThe backend is essentially what Profilarr is right now - a tool to sync some JSON files to your arrs. However, this also has some major improvements:\n\n- **Git integration**: You can select a remote repository to connect to and:\n - Add, commit, and push files; branch off; merge into. This isn't that useful for end users, but I cannot stress enough how much time and suffering this has saved me. Being able to revert regex/custom format/quality profiles to the last commit is my favorite thing I've ever coded.\n - **Branching**: You can have different branches for different things. Of course, this is useful for development, but it also allows you to do things like: separate setups for Radarr/Sonarr/Lidarr. Most importantly, it allows us developers to set stable, dev, and feature branches.\n - **Pulling**: You can now pull in changes from specific branches from a remote repository. You can view differences and decide if you want to pull these changes in. You can set it to be automatic and only alert on merge conflicts (you change something, but an incoming change for that item exists as well). 
You can choose to get the most stable branch or the latest features merged into develop.\n - **External sources**: You can set your own repo of regexes, custom formats, and quality profiles and share it with whoever you want. As I mentioned in my last dev log, I'll be working on a compiler to convert our standard Profilarr format with the existing arr format. The really cool thing about this is it works both ways. This means the git integration + compiler will allow you to use Profilarr with the trash guides. It'll probably take some tweaking, but I know it's definitely possible now.\n\n## Containerisation\n\nProfilarr will FINALLY be dockerised.\n\n# Development\n\nWith these changes in place, it has massively improved and sped up development. Working in a proprietary tool now allows me the freedom to just implement a feature whenever I want to. Want to filter custom formats with the release tier tag? Boom, implemented. Want to auto-apply scores to custom formats in quality profiles based on tags? Boom, implemented.\n\n## Machine Learning\n\nThis part is mostly speculation and rambling - nothing concrete yet. I really want to incorporate some kind of AI help into Profilarr. A button you can press to auto-generate regex or a custom format. I've read countless Reddit posts of someone unfamiliar with regex/custom formats/profiles asking for help in trying to learn. \"How do I write a custom format that matches x265 releases under size x?\" It's so easily solved using AI.\n\nI want to implement this one day, I just don't have enough knowledge or experience to do it yet. The best I've come up with is something that sends a request to OpenAI's API with a prompt. The results are less than ideal. But just imagine the future where some kind of machine learning tool has access to an entire database of regexes, custom formats, and quality profiles curated by hundreds of people, and can use that knowledge to predict patterns and truly tailor stuff to suit people's needs. Who knows if it ever gets to that point, but that's my vision for Dictionarry.\n\nRamble over, as you can tell I've been feeling pretty motivated lately!\n\nAnyway, here's some images of profilarr v2.\n\n**Regex Page**:\n\n![Regex Page](https://i.imgur.com/kMZ9qII.png)\n\n**Custom Format Page**:\n\n![Custom Format Page](https://i.imgur.com/mCyDxId.png)\n\n**Status Page**:\n\n![Status Page](https://i.imgur.com/ZleeOEF.png)\n\nOf course, everything is still a heavy work in progress.\n\nThat's all for today!",
"last_modified": "2025-08-20T00:28:41.180792+00:00",
"title": "Shiny New Stuff",
"slug": "shiny_new_stuff",
"author": "santiagosayshey",
"created": "2024-8-19",
"tags": [
"devlog",
"architecture"
]
},
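The tag-driven syncing described in Shiny New Stuff (tag the custom formats / quality profiles that should be synced, much like Prowlarr scopes indexers by tag) reduces to a simple intersection filter. The item shapes and tag names here are illustrative assumptions, not Profilarr's actual data model.

```python
# Sketch of tag-based selective sync: only items whose tags intersect the tags
# configured for an arr instance get pushed. Names and tags are made up.

custom_formats = [
    {"name": "x265 Encode", "tags": ["radarr-main", "encode"]},
    {"name": "Bad Dual Groups", "tags": ["sonarr-anime"]},
    {"name": "Repack", "tags": ["radarr-main", "sonarr-main"]},
]

def items_to_sync(items: list[dict], instance_tags: set[str]) -> list[dict]:
    return [item for item in items if instance_tags & set(item["tags"])]

print([i["name"] for i in items_to_sync(custom_formats, {"radarr-main"})])
# -> ['x265 Encode', 'Repack']
```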
{
"_id": "Vision Almost Realised",
"content": "Hey @everyone, small log for today!\n\n```bash\n$ python profile_compile.py 'profiles/1080p Encode.yml' '1080p Encode (sonarr - master).json' -s\nConverted profile saved to: 1080p Encode (sonarr - master).json\n\n$ python importarr.py\nImporting Quality Profiles to sonarr : Master\nUpdating '1080p Encode' quality profile : SUCCESS\n```\n\nThese two commands are the culmination of the architecture overhaul I talked about in August: https://discord.com/channels/1202375791556431892/1246504849265266738/1272756617041154049. The Profilarr standard format _**works**_. A typical profile is now about 300 lines (down from 1000 each for radarr / sonarr), is able to be compiled from PSF to Radarr OR Sonarr (and back!). Regex patterns allow format resolution, so no more editing the same thing 5, 10... 20 times.\n\nI'm currently in the process of hooking up the database to the new website, and that's looking pretty cool too. I cannot even explain how good it feels to be able to edit a profile once inside Profilarr, push those changes directly from Profilarr, have those changes reflected as incoming changes for end users, and as updated information on the website all in one fell swoop.\n\nIt's taken a huge effort the past 4 months, and I still have to actually connect it to the backend, but I'm fairly happy with how it's turned out. The changes won't be all that evident right away for you guys, but it's going to save me (and anyone who wants to contribute) hours upon hours of development time for everything that I have planned.\n\n## Golden Popcorn Performance Index Changes\n\nThe current GPPi algorithm is strong, but fundamentally flawed. It does not take into consideration release groups who have no data. There are terrific new groups (ZoroSenpai for example) who should be tier ~2 at least, but aren't simply because they have no data. How do we fix this?\n\n### Popularity\n\nFor every encode at a specific resolution for a movie / tv show that is currently _popular_, a release group receives +1 score to their GPPi. At the end of every month, the score is reset, and the previous score is normalized (tbd on how) and added to their permanent GPPi score (up to a certain point and probably never past tier ~3)\n\nThis process will be completely automatic and will hopefully solve the problem of new good release groups.\n\n### Grouping\n\nThe previous 'tiers' for release groups was just natural intuitive grouping. Humans are surprisingly very, very good at pattern recognition so it was never really a problem. However, it was manual, and we dont like manual around here. Enter 'K Means Clustering'. Essentially it's just a fancy algorithm that finds natural break points between groups of numbers. Using K means, I've dropped the number of 1080p Tiers from 7 down to 5 which in turn has increased immutability. Small changes, but will be important in the long run.\n\n## Thank You!\n\nThat's all for today, I hope everyone's doing alright and enjoying the holidays :grinning:",
"last_modified": "2025-08-20T00:28:41.180792+00:00",
"title": "Vision (Almost) Realised",
"slug": "vision_almost_realised",
"author": "santiagosayshey",
"created": "2024-12-24",
"tags": [
"devlog",
"architecture",
"gppi"
]
},
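The grouping step in Vision (Almost) Realised - k-means clustering over release group scores to find natural tier breaks - can be sketched in a few lines. The scores below are fabricated placeholders; only the technique (k-means) and the five 1080p tiers come from the post.

```python
import numpy as np
from sklearn.cluster import KMeans

# Placeholder GPPi-style scores per release group (invented numbers).
gppi = {"GroupA": 92.1, "GroupB": 88.4, "GroupC": 71.0, "GroupD": 67.5,
        "GroupE": 44.2, "GroupF": 39.9, "GroupG": 21.3, "GroupH": 8.0}

X = np.array(list(gppi.values())).reshape(-1, 1)
km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X)

# Rank clusters by mean score so that tier 1 holds the highest-scoring groups.
order = np.argsort(-km.cluster_centers_.ravel())
tier_of_cluster = {int(c): rank + 1 for rank, c in enumerate(order)}

tiers = {group: tier_of_cluster[int(label)] for group, label in zip(gppi, km.labels_)}
print(tiers)  # e.g. {'GroupA': 1, 'GroupB': 1, 'GroupC': 2, ...}
```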
{
"_id": "Website 2.0",
"content": "Hey everyone, medium-ish update today.\n\n## Website 2.0\n\nI've wanted to transition away from the old site / mkdocs for a while now as its quite hard to maintain and keep everything up to date, so I built a new site using Next.js that uses ISR to rebuild its content using the dictionarry database. Basically this just means:\n\n- Database gets an update -> Website sees its data is stale -> Website rebuilds itself with new data -> Santiago smiles in not needing to do anything\n\nThis all ties into the whole \"write once\" philosophy that I instilled with Profilarr and has made development much easier. There are still quite a few layout issues and perhaps a devlog refactor I need to fit in somewhere, but I'm happy to share it with you guys as it is.\n\n[Website 2.0](https://dictionarry.dev/)\n\n![website2.0](https://i.imgur.com/eORTwml.png)\n\nThe old site will go down soon, sorry if I broke anyone's workflows D:\n\n### Profile Selector?\n\nThis idea has gone through many iterations since i started Dictionarry last year.\n\n1. A static flowchart with not nearly enough information / choice: https://github.com/santiagosayshey/website/blob/030f3631b4f6fffdb7fa9f4696e5d12defc84a46/docs/Profiles/flowchart.png\n2. The \"Profile Selector\" (terrible name): https://selectarr.pages.dev/\n3. Frankenstein's triangle: [Discord Link](https://discord.com/channels/1202375791556431892/1246504849265266738/1246536424925171925)\n\nFrankenstein's triangle was supposed to be what i shipped with the new website (and I actually finished it too!). It worked by calculating the area of the efficiency/quality/compatibility triangle using some formula named after some guy i forget, to guesstimate user choice based on their previous selection. It did this by normalizing the \"score\" of each profile on each of it's axes and finding the best fitting triangle that used the axis that was changed.\n\nResults were pretty good but I felt that it abstracted _too much_ of what made any user choice meaningful so I decided to scrap it.\n\n### Profile Builder!\n\nIn it's place is the \"Profile Builder\" (maybe also a terrible name). It still attempts to abstract audio/video down into more quantifiable groupings, but limits itself to explanations of certain things where more abstraction is detrimental. It's pretty self explanatory once you use it, but basically you choose through increasingly niche groupings -> resolution -> compression -> encode type -> codec -> HDR. At each step, a list of recommended profiles will be shown. I think this new system helps to fix the \"trying to get the profile I want\" issue as it starts pretty broad and gets increasingly more specific the more things you choose. It's up now, give it a playwith; let me know if its good / bad / needs changes: [Profile Buider](https://dictionarry.dev/builder)\n\n![Profile Builder](https://i.imgur.com/ka8KSHl.png)\n\n## Encode Efficiency Index\n\nHere we go, meat and potatoes. This is another release group metric just like the Golden Popcorn Performance Index. 
Heres's the play-by-play:\n\n- It evaluates release groups on their average compression ratio (how big their encode is compared to a source), to discern quality and/or efficiency.\n- It can discern transparency by targeting ratios at which a codec begins to \"saturate\"\n- It can discern efficiency by targeting ratios at which a codec reaches it's \"efficiency apex\"\n\nThis is a heavily watered down explanation of the metric, you can read about it (with examples), in very heavy detail [here](https://dictionarry.dev/wiki/EEi). Months of research and iteration has gone into this, and I really think this is Dictionarry's biggest asset so far. When AV1 profiles become a thing, this metric is ready for it.\n\n#### No More Parsing Codecs!!!!\n\nIf you parse the efficiency of a release group directly, then you know youre getting something at a file size you want. This means we don't have to use h265 / x265 as a ridiculous proxy baseline to find content we want anymore. We can just downrank all h264 instead which is much more reliable\n\n#### 2160p Quality (Encode) Profile + Release Group Tierlist!!!!!!!!\n\nUsing EEI, we target 4k release groups at 55% target ratio to discern transparency. No golden popcorns needed, no complex trump parsing crap. No \"popular\" vote. Whenever something isn't documented, we simply add that movie / tv show to the data source and groupings update automatically. It's almost like magic.\n\nThis metric has made the 2160p Quality profile possible and i dare say it's the most comprehensive one I've worked on thus far. Give the quality profile and tier lists a read here:\n\n- [216p Quality Profile](https://dictionarry.dev/profiles/2160p-quality)\n- [2160p Quality Release Group Tiers](https://dictionarry.dev/tiers/2160p/quality)\n\n#### Thanks\n\n- Thanks to @seraphys for helping out with the profile creation / giving constant feedback.\n- Thanks to @erphise for being a tester / the catalyst for the creation of this metric. If they hadn't been testing out the HEVC profile, we never would have talked about compression ratios which never meant I got the idea for the metric in the first place.\n\nShow them some love.\n\n## Profilarr\n\nAlmost done, I took a break for a couple weeks to finish up the website but I'm gonna get rolling again soon. I just finalized authentication, database migrations and the pull module. The only major thing left is getting everything ready for production. This means setting up the docker image, unraid template, etc, etc. It's hard to say how long this is gonna take since I'm basically learning it all on the fly so bare with me on this. But, it's almost done and a beta test will be out soon (hopefully)",
"last_modified": "2025-08-20T00:28:41.180792+00:00",
"title": "Website 2.0",
"slug": "website2.0",
"author": "santiagosayshey",
"created": "2025-02-02",
"tags": [
"devlog",
"website",
"profile_builder",
"eei",
"2160p",
"quality"
]
}
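The Encode Efficiency Index entry describes comparing an encode's size to its source to get a compression ratio, then judging transparency or efficiency against target ratios (55% is the 2160p transparency target named in the post). Here is a minimal sketch of that calculation, with invented group names and sizes; the real metric is defined on the EEi wiki page linked above.

```python
from statistics import mean

# Hypothetical (encode_size, source_size) pairs per release group, in GB.
samples = {
    "GroupA": [(32.0, 55.0), (34.5, 60.2)],
    "GroupB": [(12.1, 58.0), (10.9, 52.4)],
}

TARGET_RATIO = 0.55  # 55% ratio the post uses to discern 2160p transparency

def avg_compression_ratio(pairs):
    # Compression ratio = encode size / source size, averaged over a group's encodes.
    return mean(encode / source for encode, source in pairs)

for group, pairs in samples.items():
    ratio = avg_compression_ratio(pairs)
    verdict = "around the transparency target" if ratio >= TARGET_RATIO else "leaner / efficiency-oriented"
    print(f"{group}: {ratio:.0%} -> {verdict}")
```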
]