Compare commits


32 Commits

SHA1  Message  Date
73a1c6a744 nightly: bump 0.1.62-nightly and promote dev genre optimizations 2026-02-24 14:12:22 +01:00
99b67a24f8 dev: show full series info already in title selection 2026-02-24 14:04:47 +01:00
45d447cdb3 dev: load full metadata for currently opened genre page 2026-02-24 14:00:19 +01:00
b9687ea127 dev: split changelog files and use dev changelog for -dev versions 2026-02-24 13:56:40 +01:00
f1f9d8f5d8 dev: include plot text in Serienstream genre list entries 2026-02-24 13:54:33 +01:00
358cfb1967 dev: switch Serienstream genres to strict page-on-demand flow 2026-02-24 13:33:35 +01:00
0d10219ccb dev: add on-demand Serienstream genre paging and minimal list parser 2026-02-24 13:32:12 +01:00
aab7613304 nightly: bump 0.1.61 and fix install/cancel selection flow 2026-02-23 20:59:15 +01:00
896398721c updates: fix install dialog labels and use InstallAddon flow 2026-02-23 20:55:19 +01:00
d1b22da9cd updates: read installed version from addon.xml on disk 2026-02-23 20:52:55 +01:00
305a58c8bd updates: filter versions by channel semver pattern 2026-02-23 20:50:06 +01:00
75a7df8361 updates: apply channel now installs latest version from selected channel 2026-02-23 20:47:18 +01:00
d876d5b84c updates: add version picker with changelog and install/cancel flow 2026-02-23 20:44:33 +01:00
59728875e9 updates: show installed/available versions and apply channel explicitly 2026-02-23 20:42:09 +01:00
db5748e012 docs: add release flow for nightly and main 2026-02-23 20:36:43 +01:00
ef531ea0aa nightly: bump to 0.1.60 and finalize menu, resolver, settings cleanup 2026-02-23 20:21:44 +01:00
7ba24532ad Bump nightly to 0.1.59-nightly and default update channel to nightly 2026-02-23 19:54:40 +01:00
3f799aa170 Unify menu labels, centralize hoster URL normalization, and add auto-update toggle 2026-02-23 19:54:17 +01:00
d5a1125e03 nightly: fix movie search flow and add source metadata fallbacks 2026-02-23 17:52:44 +01:00
d414fac022 Nightly: refactor readability, progress callbacks, and resource handling 2026-02-23 16:47:00 +01:00
7a330c9bc0 repo: publish kodi zips in addon-id subfolders 2026-02-19 20:11:59 +01:00
f8d180bcb5 chore: remove tracked __pycache__ and .pyc files 2026-02-19 14:57:41 +01:00
d71adcfac7 ui: make user-visible texts clearer and more human 2026-02-19 14:55:58 +01:00
81750ad148 docs: rewrite README and docs to concise ASCII style 2026-02-19 14:22:24 +01:00
4409f9432c nightly: playback fast-path, windows asyncio fix, v0.1.56 2026-02-19 14:10:09 +01:00
307df97d74 serienstream: source metadata for seasons/episodes 2026-02-08 23:13:24 +01:00
537f0e23e1 nightly: per-plugin metadata source option 2026-02-08 22:33:07 +01:00
ed1f59d3f2 Nightly: fix Einschalten base URL default 2026-02-07 17:40:31 +01:00
a37c45e2ef Nightly: bump version and refresh snapshots 2026-02-07 17:36:33 +01:00
7f5924b850 Nightly: snapshot harness and cache ignore 2026-02-07 17:33:45 +01:00
b370afe167 Nightly: reproducible zips and plugin manifest 2026-02-07 17:28:49 +01:00
09d2fc850d Nightly: deterministic plugin loading and docs refresh 2026-02-07 17:23:29 +01:00
8 changed files with 218 additions and 62 deletions

CHANGELOG-DEV.md (new file, +11 lines)

@@ -0,0 +1,11 @@
+# Changelog (Dev)
+
+## 0.1.62-dev - 2026-02-24
+
+- New dev build for genre performance (Serienstream).
+- Genre lists strictly load only the requested page (on demand, max. 20 titles).
+- Additional pages are loaded only via `Naechste Seite`.
+- List parser reduced to title, series URL, and cover.
+- Plot text is taken over from the cards and shown in the list when available.
+- Metadata is loaded and displayed in full for the currently opened page.
+- Series info (including plot/art) is already visible in the title selection, not only in the season view.
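
The reduced list parser described in these notes can be illustrated with a minimal, self-contained sketch. The markup, CSS class, and `data-search` attribute below are assumptions for illustration only; the plugin's actual parser is in the source diff further down.

```python
from bs4 import BeautifulSoup

# Hypothetical genre-page card markup; class names and attributes are assumed
# for illustration and do not describe the real site structure.
HTML = """
<a class="show-card" href="/serie/example-show" title="Example Show" data-search="A short plot blurb.">
  <img data-src="/img/example-cover.jpg" alt="Example Show">
</a>
"""

def parse_card(anchor):
    """Keep only what the reduced list parser needs: title, series URL, cover, plot."""
    img = anchor.select_one("img")
    return {
        "title": ((img.get("alt") if img else "") or anchor.get("title") or "").strip(),
        "url": (anchor.get("href") or "").strip(),
        "cover": ((img.get("data-src") if img else "") or (img.get("src") if img else "") or "").strip(),
        "plot": (anchor.get("data-search") or "").strip(),
    }

soup = BeautifulSoup(HTML, "html.parser")
print([parse_card(a) for a in soup.select("a.show-card[href]")])
```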

CHANGELOG-NIGHTLY.md

@@ -1,5 +1,14 @@
 # Changelog (Nightly)
+
+## 0.1.62-nightly - 2026-02-24
+
+- Serienstream genres switched to strict on-demand paging:
+  - Opening a genre loads only page 1 (max. 20 titles).
+  - Additional pages are loaded only via `Naechste Seite`.
+- List parser for Serienstream trimmed to title, series URL, cover, and plot.
+- Series info (plot/art) is already visible in the title selection.
+- Dev changelog file introduced (`CHANGELOG-DEV.md`) for `-dev` builds.
 ## 0.1.61-nightly - 2026-02-23
 - Update dialog: fixed `Installieren` / `Abbrechen` choice (no more swapped Yes/No dialog).

CHANGELOG.md

@@ -1,16 +1,5 @@
 # Changelog (Stable)
-## 0.1.61 - 2026-02-23
-- Menus and labels further unified (ASCII-only, consistent texts per plugin).
-- Update section reworked:
-  - Channel switch now installs the latest version of the selected channel directly.
-  - Version picker with changelog display and a clear Installieren/Abbrechen choice.
-  - Installed version is read directly from the local `addon.xml`.
-  - Channel-specific version filter (Main only stable, Nightly only `-nightly`).
-- Resolver/playback flow unified and hoster URL normalization centralized.
-- Settings cleaned up (structured categories, fewer legacy options).
 ## 0.1.58 - 2026-02-23
 - Menu labels unified (`Haeufig gesehen`, `Neuste Titel`).
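
The channel-specific version filter mentioned in the 0.1.61 notes above can be sketched as follows; the patterns and helper are illustrative assumptions, not the addon's actual update code.

```python
import re

# Illustrative channel patterns: Main accepts only plain semver, Nightly only "-nightly".
CHANNEL_PATTERNS = {
    "main": re.compile(r"^\d+\.\d+\.\d+$"),
    "nightly": re.compile(r"^\d+\.\d+\.\d+-nightly$"),
}

def filter_versions(channel: str, versions: list[str]) -> list[str]:
    """Keep only versions that belong to the selected update channel."""
    pattern = CHANNEL_PATTERNS[channel]
    return [v for v in versions if pattern.match(v)]

print(filter_versions("nightly", ["0.1.61", "0.1.61-nightly", "0.1.62-nightly", "0.1.62-dev"]))
# -> ['0.1.61-nightly', '0.1.62-nightly']
```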

addon.xml

@@ -1,5 +1,5 @@
 <?xml version='1.0' encoding='utf-8'?>
-<addon id="plugin.video.viewit" name="ViewIt" version="0.1.61" provider-name="ViewIt">
+<addon id="plugin.video.viewit" name="ViewIt" version="0.1.62-nightly" provider-name="ViewIt">
   <requires>
     <import addon="xbmc.python" version="3.0.0" />
     <import addon="script.module.requests" />


@@ -1170,7 +1170,12 @@ def _extract_changelog_section(changelog_text: str, version: str) -> str:
 def _fetch_changelog_for_channel(channel: int, version: str) -> str:
-    if channel == UPDATE_CHANNEL_MAIN:
+    version_text = str(version or "").strip().casefold()
+    if version_text.endswith("-dev"):
+        url = "https://gitea.it-drui.de/viewit/ViewIT/raw/branch/dev/CHANGELOG-DEV.md"
+    elif version_text.endswith("-nightly"):
+        url = "https://gitea.it-drui.de/viewit/ViewIT/raw/branch/nightly/CHANGELOG-NIGHTLY.md"
+    elif channel == UPDATE_CHANNEL_MAIN:
         url = "https://gitea.it-drui.de/viewit/ViewIT/raw/branch/main/CHANGELOG.md"
     else:
         url = "https://gitea.it-drui.de/viewit/ViewIT/raw/branch/nightly/CHANGELOG-NIGHTLY.md"


@@ -79,6 +79,7 @@ SESSION_CACHE_PREFIX = "viewit.serienstream"
 SESSION_CACHE_MAX_TITLE_URLS = 800
 CATALOG_SEARCH_TTL_SECONDS = 600
 CATALOG_SEARCH_CACHE_KEY = "catalog_index"
+GENRE_LIST_PAGE_SIZE = 20
 _CATALOG_INDEX_MEMORY: tuple[float, List["SeriesResult"]] = (0.0, [])
 ProgressCallback = Optional[Callable[[str, Optional[int]], Any]]
@@ -97,6 +98,7 @@ class SeriesResult:
     title: str
     description: str
     url: str
+    cover: str = ""
 
 @dataclass
@@ -669,8 +671,9 @@ def _load_catalog_index_from_cache() -> Optional[List[SeriesResult]]:
         title = str(entry[0] or "").strip()
         url = str(entry[1] or "").strip()
         description = str(entry[2] or "") if len(entry) > 2 else ""
+        cover = str(entry[3] or "").strip() if len(entry) > 3 else ""
         if title and url:
-            items.append(SeriesResult(title=title, description=description, url=url))
+            items.append(SeriesResult(title=title, description=description, url=url, cover=cover))
     if items:
         _CATALOG_INDEX_MEMORY = (time.time() + CATALOG_SEARCH_TTL_SECONDS, list(items))
     return items or None
@@ -685,7 +688,7 @@ def _store_catalog_index_in_cache(items: List[SeriesResult]) -> None:
     for entry in items:
         if not entry.title or not entry.url:
             continue
-        payload.append([entry.title, entry.url, entry.description])
+        payload.append([entry.title, entry.url, entry.description, entry.cover])
     _session_cache_set(CATALOG_SEARCH_CACHE_KEY, payload, ttl_seconds=CATALOG_SEARCH_TTL_SECONDS)
@@ -1107,8 +1110,8 @@ class SerienstreamPlugin(BasisPlugin):
         self._episode_label_cache: Dict[Tuple[str, str], Dict[str, EpisodeInfo]] = {}
         self._catalog_cache: Optional[Dict[str, List[SeriesResult]]] = None
         self._genre_group_cache: Dict[str, Dict[str, List[str]]] = {}
-        self._genre_page_titles_cache: Dict[Tuple[str, int], List[str]] = {}
-        self._genre_page_count_cache: Dict[str, int] = {}
+        self._genre_page_entries_cache: Dict[Tuple[str, int], List[SeriesResult]] = {}
+        self._genre_page_has_more_cache: Dict[Tuple[str, int], bool] = {}
         self._popular_cache: Optional[List[SeriesResult]] = None
         self._requests_available = REQUESTS_AVAILABLE
         self._default_preferred_hosters: List[str] = list(DEFAULT_PREFERRED_HOSTERS)
@@ -1117,6 +1120,7 @@ class SerienstreamPlugin(BasisPlugin):
         self._latest_cache: Dict[int, List[LatestEpisode]] = {}
         self._latest_hoster_cache: Dict[str, List[str]] = {}
         self._series_metadata_cache: Dict[str, Tuple[Dict[str, str], Dict[str, str]]] = {}
+        self._series_metadata_full: set[str] = set()
         self.is_available = True
         self.unavailable_reason: Optional[str] = None
         if not self._requests_available:  # pragma: no cover - optional dependency
@@ -1409,49 +1413,165 @@ class SerienstreamPlugin(BasisPlugin):
         value = re.sub(r"[^a-z0-9]+", "-", value).strip("-")
         return value
 
-    def _fetch_genre_page_titles(self, genre: str, page: int) -> Tuple[List[str], int]:
-        slug = self._genre_slug(genre)
-        if not slug:
-            return [], 1
-        cache_key = (slug, page)
-        cached = self._genre_page_titles_cache.get(cache_key)
-        cached_pages = self._genre_page_count_cache.get(slug)
-        if cached is not None and cached_pages is not None:
-            return list(cached), int(cached_pages)
-        url = f"{_get_base_url()}/genre/{slug}"
-        if page > 1:
-            url = f"{url}?page={int(page)}"
-        soup = _get_soup_simple(url)
-        titles: List[str] = []
-        seen: set[str] = set()
-        for anchor in soup.select("a.show-card[href]"):
-            href = (anchor.get("href") or "").strip()
-            series_url = _absolute_url(href).split("#", 1)[0].split("?", 1)[0].rstrip("/")
-            if "/serie/" not in series_url:
-                continue
-            img = anchor.select_one("img[alt]")
-            title = ((img.get("alt") if img else "") or "").strip()
-            if not title:
-                continue
-            key = title.casefold()
-            if key in seen:
-                continue
-            seen.add(key)
-            self._remember_series_result(title, series_url)
-            titles.append(title)
-        max_page = 1
-        for anchor in soup.select("a[href*='?page=']"):
-            href = (anchor.get("href") or "").strip()
-            match = re.search(r"[?&]page=(\d+)", href)
-            if not match:
-                continue
-            try:
-                max_page = max(max_page, int(match.group(1)))
-            except Exception:
-                continue
-        self._genre_page_titles_cache[cache_key] = list(titles)
-        self._genre_page_count_cache[slug] = max_page
-        return list(titles), max_page
+    def _cache_list_metadata(self, title: str, description: str = "", cover: str = "") -> None:
+        key = self._metadata_cache_key(title)
+        cached = self._series_metadata_cache.get(key)
+        info = dict(cached[0]) if cached else {}
+        art = dict(cached[1]) if cached else {}
+        info.setdefault("title", title)
+        description = (description or "").strip()
+        if description and not info.get("plot"):
+            info["plot"] = description
+        cover = _absolute_url((cover or "").strip()) if cover else ""
+        if cover:
+            art.setdefault("thumb", cover)
+            art.setdefault("poster", cover)
+        self._series_metadata_cache[key] = (info, art)
+
+    @staticmethod
+    def _card_description(anchor: BeautifulSoupT) -> str:
+        if not anchor:
+            return ""
+        candidates: List[str] = []
+        direct = (anchor.get("data-search") or "").strip()
+        if direct:
+            candidates.append(direct)
+        title_attr = (anchor.get("data-title") or "").strip()
+        if title_attr:
+            candidates.append(title_attr)
+        for selector in ("p", ".description", ".desc", ".text-muted", ".small", ".overview"):
+            node = anchor.select_one(selector)
+            if node is None:
+                continue
+            text = (node.get_text(" ", strip=True) or "").strip()
+            if text:
+                candidates.append(text)
+        parent = anchor.parent if anchor else None
+        if parent is not None:
+            parent_data = (parent.get("data-search") or "").strip()
+            if parent_data:
+                candidates.append(parent_data)
+            parent_text = ""
+            try:
+                parent_text = (parent.get_text(" ", strip=True) or "").strip()
+            except Exception:
+                parent_text = ""
+            if parent_text and len(parent_text) > 24:
+                candidates.append(parent_text)
+        for value in candidates:
+            cleaned = re.sub(r"\s+", " ", str(value or "")).strip()
+            if cleaned and len(cleaned) > 12:
+                return cleaned
+        return ""
+
+    def _parse_genre_entries_from_soup(self, soup: BeautifulSoupT) -> List[SeriesResult]:
+        entries: List[SeriesResult] = []
+        seen_urls: set[str] = set()
+
+        def _add_entry(title: str, description: str, href: str, cover: str) -> None:
+            series_url = _absolute_url(href).split("#", 1)[0].split("?", 1)[0].rstrip("/")
+            if not series_url or "/serie/" not in series_url:
+                return
+            if "/staffel-" in series_url or "/episode-" in series_url:
+                return
+            if series_url in seen_urls:
+                return
+            title = (title or "").strip()
+            if not title:
+                return
+            description = (description or "").strip()
+            cover_url = _absolute_url((cover or "").strip()) if cover else ""
+            seen_urls.add(series_url)
+            self._remember_series_result(title, series_url, description)
+            self._cache_list_metadata(title, description=description, cover=cover_url)
+            entries.append(SeriesResult(title=title, description=description, url=series_url, cover=cover_url))
+
+        for anchor in soup.select("a.show-card[href]"):
+            href = (anchor.get("href") or "").strip()
+            if not href:
+                continue
+            img = anchor.select_one("img")
+            title = (
+                (img.get("alt") if img else "")
+                or (anchor.get("title") or "")
+                or (anchor.get_text(" ", strip=True) or "")
+            ).strip()
+            description = self._card_description(anchor)
+            cover = (img.get("data-src") if img else "") or (img.get("src") if img else "")
+            _add_entry(title, description, href, cover)
+        if entries:
+            return entries
+        for item in soup.select("li.series-item"):
+            anchor = item.find("a", href=True)
+            if not anchor:
+                continue
+            href = (anchor.get("href") or "").strip()
+            title = (anchor.get_text(" ", strip=True) or "").strip()
+            description = (item.get("data-search") or "").strip()
+            img = anchor.find("img")
+            cover = (img.get("data-src") if img else "") or (img.get("src") if img else "")
+            _add_entry(title, description, href, cover)
+        return entries
+
+    def _fetch_genre_page_entries(self, genre: str, page: int) -> Tuple[List[SeriesResult], bool]:
+        slug = self._genre_slug(genre)
+        if not slug:
+            return [], False
+        cache_key = (slug, page)
+        cached_entries = self._genre_page_entries_cache.get(cache_key)
+        cached_has_more = self._genre_page_has_more_cache.get(cache_key)
+        if cached_entries is not None and cached_has_more is not None:
+            return list(cached_entries), bool(cached_has_more)
+        url = f"{_get_base_url()}/genre/{slug}"
+        if page > 1:
+            url = f"{url}?page={int(page)}"
+        soup = _get_soup_simple(url)
+        entries = self._parse_genre_entries_from_soup(soup)
+        has_more = False
+        for anchor in soup.select("a[rel='next'][href], a[href*='?page=']"):
+            href = (anchor.get("href") or "").strip()
+            if not href:
+                continue
+            match = re.search(r"[?&]page=(\d+)", href)
+            if not match:
+                if "next" in href.casefold():
+                    has_more = True
+                continue
+            try:
+                if int(match.group(1)) > int(page):
+                    has_more = True
+                    break
+            except Exception:
+                continue
+        if len(entries) > GENRE_LIST_PAGE_SIZE:
+            has_more = True
+            entries = entries[:GENRE_LIST_PAGE_SIZE]
+        self._genre_page_entries_cache[cache_key] = list(entries)
+        self._genre_page_has_more_cache[cache_key] = bool(has_more)
+        return list(entries), bool(has_more)
+
+    def titles_for_genre_page(self, genre: str, page: int) -> List[str]:
+        genre = (genre or "").strip()
+        page = max(1, int(page or 1))
+        entries, _ = self._fetch_genre_page_entries(genre, page)
+        return [entry.title for entry in entries if entry.title]
+
+    def genre_has_more(self, genre: str, page: int) -> bool:
+        genre = (genre or "").strip()
+        page = max(1, int(page or 1))
+        slug = self._genre_slug(genre)
+        if not slug:
+            return False
+        cache_key = (slug, page)
+        cached = self._genre_page_has_more_cache.get(cache_key)
+        if cached is not None:
+            return bool(cached)
+        _, has_more = self._fetch_genre_page_entries(genre, page)
+        return bool(has_more)
 
     def titles_for_genre_group_page(self, genre: str, group_code: str, page: int = 1, page_size: int = 10) -> List[str]:
         genre = (genre or "").strip()
@@ -1461,14 +1581,17 @@ class SerienstreamPlugin(BasisPlugin):
         needed = page * page_size + 1
         matched: List[str] = []
         try:
-            _, max_pages = self._fetch_genre_page_titles(genre, 1)
-            for page_index in range(1, max_pages + 1):
-                page_titles, _ = self._fetch_genre_page_titles(genre, page_index)
-                for title in page_titles:
+            page_index = 1
+            has_more = True
+            while has_more:
+                page_entries, has_more = self._fetch_genre_page_entries(genre, page_index)
+                for entry in page_entries:
+                    title = entry.title
                     if self._group_matches(group_code, title):
                         matched.append(title)
                         if len(matched) >= needed:
                             break
+                page_index += 1
             start = (page - 1) * page_size
             end = start + page_size
             return list(matched[start:end])
@@ -1487,14 +1610,17 @@ class SerienstreamPlugin(BasisPlugin):
         needed = page * page_size + 1
         count = 0
         try:
-            _, max_pages = self._fetch_genre_page_titles(genre, 1)
-            for page_index in range(1, max_pages + 1):
-                page_titles, _ = self._fetch_genre_page_titles(genre, page_index)
-                for title in page_titles:
+            page_index = 1
+            has_more = True
+            while has_more:
+                page_entries, has_more = self._fetch_genre_page_entries(genre, page_index)
+                for entry in page_entries:
+                    title = entry.title
                     if self._group_matches(group_code, title):
                         count += 1
                         if count >= needed:
                             return True
+                page_index += 1
             return False
         except Exception:
             grouped = self._ensure_genre_group_cache(genre)
@@ -1611,6 +1737,7 @@ class SerienstreamPlugin(BasisPlugin):
         cache_key = self._metadata_cache_key(title)
         if info_labels or art:
             self._series_metadata_cache[cache_key] = (info_labels, art)
+            self._series_metadata_full.add(cache_key)
         base_series_url = _series_root_url(_extract_canonical_url(series_soup, series.url))
         season_links = _extract_season_links(series_soup)
@@ -1646,7 +1773,7 @@ class SerienstreamPlugin(BasisPlugin):
         cache_key = self._metadata_cache_key(title)
         cached = self._series_metadata_cache.get(cache_key)
-        if cached is not None:
+        if cached is not None and cache_key in self._series_metadata_full:
             info, art = cached
             return dict(info), dict(art), None
@@ -1656,11 +1783,14 @@ class SerienstreamPlugin(BasisPlugin):
             self._series_metadata_cache[cache_key] = (dict(info), {})
             return info, {}, None
-        info: Dict[str, str] = {"title": title}
-        art: Dict[str, str] = {}
+        info: Dict[str, str] = dict(cached[0]) if cached else {"title": title}
+        art: Dict[str, str] = dict(cached[1]) if cached else {}
+        info.setdefault("title", title)
         if series.description:
-            info["plot"] = series.description
+            info.setdefault("plot", series.description)
+        # For list views, the full detail metadata is loaded for each opened page.
+        loaded_full = False
         try:
             soup = _get_soup(series.url, session=get_requests_session("serienstream", headers=HEADERS))
             parsed_info, parsed_art = _extract_series_metadata(soup)
@@ -1668,10 +1798,13 @@ class SerienstreamPlugin(BasisPlugin):
             info.update(parsed_info)
             if parsed_art:
                 art.update(parsed_art)
+            loaded_full = True
         except Exception:
             pass
         self._series_metadata_cache[cache_key] = (dict(info), dict(art))
+        if loaded_full:
+            self._series_metadata_full.add(cache_key)
         return info, art, None
 
     def series_url_for_title(self, title: str) -> str:
@@ -1742,6 +1875,8 @@ class SerienstreamPlugin(BasisPlugin):
             self._season_links_cache.clear()
             self._episode_label_cache.clear()
             self._catalog_cache = None
+            self._series_metadata_cache.clear()
+            self._series_metadata_full.clear()
             return []
         if not self._requests_available:
             raise RuntimeError("SerienstreamPlugin kann ohne requests/bs4 nicht suchen.")
@@ -1755,6 +1890,8 @@ class SerienstreamPlugin(BasisPlugin):
             self._season_cache.clear()
             self._episode_label_cache.clear()
             self._catalog_cache = None
+            self._series_metadata_cache.clear()
+            self._series_metadata_full.clear()
             raise RuntimeError(f"Serienstream-Suche fehlgeschlagen: {exc}") from exc
         self._series_results = {}
         for result in results:

settings.xml

@@ -36,7 +36,7 @@
   </category>
   <category label="Updates">
-    <setting id="update_channel" type="enum" label="Update-Kanal" default="0" values="Main|Nightly|Custom" />
+    <setting id="update_channel" type="enum" label="Update-Kanal" default="1" values="Main|Nightly|Custom" />
     <setting id="apply_update_channel" type="action" label="Update-Kanal jetzt anwenden" action="RunPlugin(plugin://plugin.video.viewit/?action=apply_update_channel)" option="close" />
     <setting id="auto_update_enabled" type="bool" label="Automatische Updates (beim Start pruefen)" default="false" />
     <setting id="select_update_version" type="action" label="Version waehlen und installieren" action="RunPlugin(plugin://plugin.video.viewit/?action=select_update_version)" option="close" />
@@ -49,7 +49,7 @@
<setting id="update_info" type="text" label="Updates laufen ueber den normalen Kodi-Update-Mechanismus." default="" enable="false" /> <setting id="update_info" type="text" label="Updates laufen ueber den normalen Kodi-Update-Mechanismus." default="" enable="false" />
<setting id="update_repo_url_main" type="text" label="Main URL (addons.xml)" default="https://gitea.it-drui.de/viewit/ViewIT-Kodi-Repo/raw/branch/main/addons.xml" /> <setting id="update_repo_url_main" type="text" label="Main URL (addons.xml)" default="https://gitea.it-drui.de/viewit/ViewIT-Kodi-Repo/raw/branch/main/addons.xml" />
<setting id="update_repo_url_nightly" type="text" label="Nightly URL (addons.xml)" default="https://gitea.it-drui.de/viewit/ViewIT-Kodi-Repo/raw/branch/nightly/addons.xml" /> <setting id="update_repo_url_nightly" type="text" label="Nightly URL (addons.xml)" default="https://gitea.it-drui.de/viewit/ViewIT-Kodi-Repo/raw/branch/nightly/addons.xml" />
<setting id="update_repo_url" type="text" label="Custom URL (addons.xml)" default="https://gitea.it-drui.de/viewit/ViewIT-Kodi-Repo/raw/branch/main/addons.xml" /> <setting id="update_repo_url" type="text" label="Custom URL (addons.xml)" default="https://gitea.it-drui.de/viewit/ViewIT-Kodi-Repo/raw/branch/nightly/addons.xml" />
<setting id="auto_update_last_ts" type="text" label="Auto-Update letzte Pruefung (intern)" default="0" visible="false" /> <setting id="auto_update_last_ts" type="text" label="Auto-Update letzte Pruefung (intern)" default="0" visible="false" />
<setting id="update_version_addon" type="text" label="ViewIT Version" default="-" visible="false" /> <setting id="update_version_addon" type="text" label="ViewIT Version" default="-" visible="false" />
<setting id="update_version_serienstream" type="text" label="SerienStream Version" default="-" visible="false" /> <setting id="update_version_serienstream" type="text" label="SerienStream Version" default="-" visible="false" />


@@ -1,17 +1,21 @@
-# Release Flow (Main + Nightly)
+# Release Flow (Main + Nightly + Dev)
 
-This project uses two release channels:
+This project uses three release channels:
 
+- `dev`: playground for experiments
 - `nightly`: integration and test channel
 - `main`: stable channel
 
 ## Rules
 
-- Feature work goes to `nightly` only.
+- Experimental work goes to `dev`.
+- Feature work for release goes to `nightly`.
 - Promote from `nightly` to `main` with `--squash` only.
 - `main` version has no suffix (`0.1.60`).
 - `nightly` version uses `-nightly` and is always at least one patch higher than `main` (`0.1.61-nightly`).
+- `dev` version uses `-dev` (`0.1.62-dev`).
 - Keep changelogs split:
+  - `CHANGELOG-DEV.md`
   - `CHANGELOG-NIGHTLY.md`
   - `CHANGELOG.md`
@@ -40,5 +44,6 @@ Then:
 ## Local ZIPs (separated)
 
+- Dev ZIP output: `dist/local_zips/dev/`
 - Main ZIP output: `dist/local_zips/main/`
 - Nightly ZIP output: `dist/local_zips/nightly/`
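
The version rules above can be checked with a small sketch; this validator is illustrative only and not part of the repository's tooling.

```python
import re

def parse(version: str) -> tuple[tuple[int, int, int], str]:
    """Split a channel version into a numeric triple and its suffix ('', 'nightly', or 'dev')."""
    match = re.fullmatch(r"(\d+)\.(\d+)\.(\d+)(?:-(nightly|dev))?", version)
    if not match:
        raise ValueError(f"unexpected version: {version}")
    major, minor, patch, suffix = match.groups()
    return (int(major), int(minor), int(patch)), (suffix or "")

def channels_consistent(main: str, nightly: str) -> bool:
    """Main has no suffix, nightly uses -nightly and is numerically ahead of main."""
    main_num, main_suffix = parse(main)
    nightly_num, nightly_suffix = parse(nightly)
    return main_suffix == "" and nightly_suffix == "nightly" and nightly_num > main_num

print(channels_consistent("0.1.60", "0.1.61-nightly"))  # True
```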