Compare commits

...

19 Commits

Author SHA1 Message Date
73a1c6a744 nightly: bump 0.1.62-nightly and promote dev genre optimizations 2026-02-24 14:12:22 +01:00
99b67a24f8 dev: show full series info already in title selection 2026-02-24 14:04:47 +01:00
45d447cdb3 dev: load full metadata for currently opened genre page 2026-02-24 14:00:19 +01:00
b9687ea127 dev: split changelog files and use dev changelog for -dev versions 2026-02-24 13:56:40 +01:00
f1f9d8f5d8 dev: include plot text in Serienstream genre list entries 2026-02-24 13:54:33 +01:00
358cfb1967 dev: switch Serienstream genres to strict page-on-demand flow 2026-02-24 13:33:35 +01:00
0d10219ccb dev: add on-demand Serienstream genre paging and minimal list parser 2026-02-24 13:32:12 +01:00
aab7613304 nightly: bump 0.1.61 and fix install/cancel selection flow 2026-02-23 20:59:15 +01:00
896398721c updates: fix install dialog labels and use InstallAddon flow 2026-02-23 20:55:19 +01:00
d1b22da9cd updates: read installed version from addon.xml on disk 2026-02-23 20:52:55 +01:00
305a58c8bd updates: filter versions by channel semver pattern 2026-02-23 20:50:06 +01:00
75a7df8361 updates: apply channel now installs latest version from selected channel 2026-02-23 20:47:18 +01:00
d876d5b84c updates: add version picker with changelog and install/cancel flow 2026-02-23 20:44:33 +01:00
59728875e9 updates: show installed/available versions and apply channel explicitly 2026-02-23 20:42:09 +01:00
db5748e012 docs: add release flow for nightly and main 2026-02-23 20:36:43 +01:00
ef531ea0aa nightly: bump to 0.1.60 and finalize menu, resolver, settings cleanup 2026-02-23 20:21:44 +01:00
7ba24532ad Bump nightly to 0.1.59-nightly and default update channel to nightly 2026-02-23 19:54:40 +01:00
3f799aa170 Unify menu labels, centralize hoster URL normalization, and add auto-update toggle 2026-02-23 19:54:17 +01:00
d5a1125e03 nightly: fix movie search flow and add source metadata fallbacks 2026-02-23 17:52:44 +01:00
14 changed files with 1631 additions and 477 deletions

CHANGELOG-DEV.md (new file)

@@ -0,0 +1,11 @@
# Changelog (Dev)
## 0.1.62-dev - 2026-02-24
- New dev build for genre performance (Serienstream).
- Genre lists strictly load only the requested page (on-demand, max. 20 titles).
- Further pages are loaded only via `Naechste Seite`.
- List parser reduced to title, series URL, and cover.
- Plot is carried over from the cards and shown in the list when available.
- Metadata is loaded and displayed in full for the currently opened page.
- Series info (incl. plot/art) is already visible in the title selection, not only in the season view.
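
The strict page-on-demand behavior described above can be sketched in a few lines. This is an illustrative stand-in, not the addon's actual API: `fetch_genre_page` plays the role of one HTTP request to a genre page, `PAGE_SIZE` mirrors the 20-title cap, and the `has_more` flag is what would drive the `Naechste Seite` entry.

```python
from typing import List, Tuple

PAGE_SIZE = 20  # mirrors the max. 20 titles per genre page

def fetch_genre_page(all_titles: List[str], page: int) -> Tuple[List[str], bool]:
    # Stand-in for a single request to /genre/<slug>?page=<n>:
    # return only the requested slice plus a has_more flag.
    start = (page - 1) * PAGE_SIZE
    chunk = all_titles[start:start + PAGE_SIZE]
    has_more = start + PAGE_SIZE < len(all_titles)
    return chunk, has_more

titles = [f"Serie {i}" for i in range(1, 46)]  # 45 fake titles
page1, more1 = fetch_genre_page(titles, 1)
page3, more3 = fetch_genre_page(titles, 3)
print(len(page1), more1, len(page3), more3)
```

Only the page the user opens is ever fetched; page 2 is requested when (and only when) the next-page entry is selected.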

CHANGELOG-NIGHTLY.md (new file)

@@ -0,0 +1,38 @@
# Changelog (Nightly)
## 0.1.62-nightly - 2026-02-24
- Switched Serienstream genres to strict on-demand paging:
  - Opening a genre loads only page 1 (max. 20 titles).
  - Further pages are loaded only via `Naechste Seite`.
- List parser for Serienstream trimmed down to title, series URL, cover, and plot.
- Series info (plot/art) is already visible in the title selection.
- Introduced a dev changelog file (`CHANGELOG-DEV.md`) for `-dev` builds.
## 0.1.61-nightly - 2026-02-23
- Update dialog: fixed selection with `Installieren` / `Abbrechen` (no more swapped Yes/No dialog).
- Versions in the update dialog are filtered by channel:
  - Main: only `x.y.z`
  - Nightly: only `x.y.z-nightly`
- Installed version is read directly from `addon.xml`.
- Switching the channel immediately installs the latest version from the selected channel.
## 0.1.59-nightly - 2026-02-23
- Contains all changes from `0.1.58`.
- Update channel defaults to `Nightly`.
- Nightly repo URL set as the default.
- Settings menu reordered:
  - Quellen
  - Metadaten
  - TMDB Erweitert
  - Updates
  - Debug Global
  - Debug Quellen
- Page size in lists set to 20.
- Removed `topstream_genre_max_pages`.
## Note
- Nightly is for testing and may change at short notice.
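
The channel filter described in the 0.1.61-nightly entry above (Main: only `x.y.z`, Nightly: only `x.y.z-nightly`) can be sketched with two regexes. The patterns below are an assumption derived from the changelog wording, not copied from the addon:

```python
import re

# Assumed channel patterns: Main accepts plain x.y.z, Nightly only x.y.z-nightly.
CHANNEL_PATTERNS = {
    "main": re.compile(r"^\d+\.\d+\.\d+$"),
    "nightly": re.compile(r"^\d+\.\d+\.\d+-nightly$"),
}

def filter_versions(versions, channel):
    # Keep only versions that match the selected channel's pattern.
    pattern = CHANNEL_PATTERNS[channel]
    return [v for v in versions if pattern.match(v)]

versions = ["0.1.58", "0.1.59-nightly", "0.1.60", "0.1.62-nightly", "0.1.62-dev"]
print(filter_versions(versions, "main"))
print(filter_versions(versions, "nightly"))
```

Note that a `-dev` suffix matches neither pattern, which is consistent with `-dev` builds getting their own changelog rather than appearing in an update channel.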

CHANGELOG.md (new file)

@@ -0,0 +1,12 @@
# Changelog (Stable)
## 0.1.58 - 2026-02-23
- Unified menu labels (`Haeufig gesehen`, `Neuste Titel`).
- Merged `Neue Titel` and `Neueste Folgen` in the menu into `Neuste Titel`.
- Hoster header adjustment centralized after `resolve_stream_link`.
- Show a notice on Cloudflare blocks from ResolveURL instead of silent failed attempts.
- Extended update settings (channel, manual check, optional auto-check).
- Metadata parsing updated in AniWorld and Filmpalast (cover/plot more robust).
- Topstreamfilm search: fixed missing `urlencode` import.
- Removed some unused functions.


@@ -1,5 +1,5 @@
<?xml version='1.0' encoding='utf-8'?>
-<addon id="plugin.video.viewit" name="ViewIt" version="0.1.57" provider-name="ViewIt">
+<addon id="plugin.video.viewit" name="ViewIt" version="0.1.62-nightly" provider-name="ViewIt">
<requires>
<import addon="xbmc.python" version="3.0.0" />
<import addon="script.module.requests" />
@@ -18,4 +18,4 @@
<license>GPL-3.0-or-later</license>
<platform>all</platform>
</extension>
</addon>

File diff suppressed because it is too large.


@@ -15,7 +15,9 @@ from __future__ import annotations
from datetime import datetime
import hashlib
import os
import re
from typing import Optional
from urllib.parse import parse_qsl, urlencode
try: # pragma: no cover - Kodi runtime
import xbmcaddon # type: ignore[import-not-found]
@@ -237,3 +239,40 @@ def dump_response_html(
max_files = get_setting_int(addon_id, max_files_setting_id, default=200)
_prune_dump_files(log_dir, prefix=filename_prefix, max_files=max_files)
_append_text_file(path, content)
def normalize_resolved_stream_url(final_url: str, *, source_url: str = "") -> str:
"""Normalizes hoster-specific headers in the final stream link.
`final_url` may carry a Kodi header suffix: `url|Key=Value&...`.
The function only adjusts known problem cases and leaves everything else unchanged.
"""
url = (final_url or "").strip()
if not url:
return ""
normalized = _normalize_supervideo_serversicuro(url, source_url=source_url)
return normalized
def _normalize_supervideo_serversicuro(final_url: str, *, source_url: str = "") -> str:
if "serversicuro.cc/hls/" not in final_url.casefold() or "|" not in final_url:
return final_url
source = (source_url or "").strip()
code_match = re.search(
r"supervideo\.(?:tv|cc)/(?:e/)?([a-z0-9]+)(?:\.html)?",
source,
flags=re.IGNORECASE,
)
if not code_match:
return final_url
code = (code_match.group(1) or "").strip()
if not code:
return final_url
media_url, header_suffix = final_url.split("|", 1)
headers = dict(parse_qsl(header_suffix, keep_blank_values=True))
headers["Referer"] = f"https://supervideo.cc/e/{code}"
return f"{media_url}|{urlencode(headers)}"
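
The header-suffix handling above can be exercised end to end. The sketch below is a simplified, self-contained re-implementation of `_normalize_supervideo_serversicuro` (the URLs and the `normalize_supervideo` name are made-up for illustration); it shows how the `url|Key=Value&...` suffix is split, the `Referer` rewritten from the supervideo embed code, and the headers re-encoded:

```python
import re
from urllib.parse import parse_qsl, urlencode

def normalize_supervideo(final_url: str, source_url: str = "") -> str:
    # Simplified standalone version of the helper in the diff: rewrite the
    # Referer header for serversicuro.cc HLS links resolved from supervideo
    # embed pages; leave everything else untouched.
    if "serversicuro.cc/hls/" not in final_url.casefold() or "|" not in final_url:
        return final_url
    m = re.search(r"supervideo\.(?:tv|cc)/(?:e/)?([a-z0-9]+)(?:\.html)?",
                  source_url, flags=re.IGNORECASE)
    if not m:
        return final_url
    media_url, suffix = final_url.split("|", 1)
    headers = dict(parse_qsl(suffix, keep_blank_values=True))
    headers["Referer"] = f"https://supervideo.cc/e/{m.group(1)}"
    return f"{media_url}|{urlencode(headers)}"

result = normalize_supervideo(
    "https://serversicuro.cc/hls/abc123/master.m3u8|User-Agent=Kodi&Referer=https://example.org/",
    source_url="https://supervideo.cc/e/xyz789.html",
)
print(result)
```

`urlencode` percent-encodes the rewritten `Referer` value, which is exactly what Kodi expects in the pipe-separated header suffix.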


@@ -754,6 +754,7 @@ class AniworldPlugin(BasisPlugin):
def __init__(self) -> None:
self._anime_results: Dict[str, SeriesResult] = {}
self._title_url_cache: Dict[str, str] = self._load_title_url_cache()
self._title_meta: Dict[str, tuple[str, str]] = {}
self._genre_names_cache: Optional[List[str]] = None
self._season_cache: Dict[str, List[SeasonInfo]] = {}
self._season_links_cache: Dict[str, List[SeasonInfo]] = {}
@@ -818,8 +819,135 @@ class AniworldPlugin(BasisPlugin):
changed = True
if changed and persist:
self._save_title_url_cache()
if description:
old_plot, old_poster = self._title_meta.get(title, ("", ""))
self._title_meta[title] = (description.strip() or old_plot, old_poster)
return changed
def _store_title_meta(self, title: str, *, plot: str = "", poster: str = "") -> None:
title = (title or "").strip()
if not title:
return
old_plot, old_poster = self._title_meta.get(title, ("", ""))
merged_plot = (plot or old_plot or "").strip()
merged_poster = (poster or old_poster or "").strip()
self._title_meta[title] = (merged_plot, merged_poster)
@staticmethod
def _is_series_image_url(url: str) -> bool:
value = (url or "").strip().casefold()
if not value:
return False
blocked = (
"/public/img/facebook",
"/public/img/logo",
"aniworld-logo",
"favicon",
"/public/img/german.svg",
"/public/img/japanese-",
)
return not any(marker in value for marker in blocked)
@staticmethod
def _extract_style_url(style_value: str) -> str:
style_value = (style_value or "").strip()
if not style_value:
return ""
match = re.search(r"url\((['\"]?)(.*?)\1\)", style_value, flags=re.IGNORECASE)
if not match:
return ""
return (match.group(2) or "").strip()
def _extract_series_metadata(self, soup: BeautifulSoupT) -> tuple[str, str, str]:
if not soup:
return "", "", ""
plot = ""
poster = ""
fanart = ""
root = soup.select_one("#series") or soup
description_node = root.select_one("p.seri_des")
if description_node is not None:
full_text = (description_node.get("data-full-description") or "").strip()
short_text = (description_node.get_text(" ", strip=True) or "").strip()
plot = full_text or short_text
if not plot:
for selector in ("meta[property='og:description']", "meta[name='description']"):
node = soup.select_one(selector)
if node is None:
continue
content = (node.get("content") or "").strip()
if content:
plot = content
break
if not plot:
for selector in (".series-description", ".seri_des", ".description", "article p"):
node = soup.select_one(selector)
if node is None:
continue
text = (node.get_text(" ", strip=True) or "").strip()
if text:
plot = text
break
cover = root.select_one("div.seriesCoverBox img[itemprop='image'], div.seriesCoverBox img")
if cover is not None:
for attr in ("data-src", "src"):
value = (cover.get(attr) or "").strip()
if value:
candidate = _absolute_url(value)
if self._is_series_image_url(candidate):
poster = candidate
break
if not poster:
for selector in ("meta[property='og:image']", "meta[name='twitter:image']"):
node = soup.select_one(selector)
if node is None:
continue
content = (node.get("content") or "").strip()
if content:
candidate = _absolute_url(content)
if self._is_series_image_url(candidate):
poster = candidate
break
if not poster:
for selector in ("img.seriesCoverBox", ".seriesCoverBox img"):
image = soup.select_one(selector)
if image is None:
continue
value = (image.get("data-src") or image.get("src") or "").strip()
if value:
candidate = _absolute_url(value)
if self._is_series_image_url(candidate):
poster = candidate
break
backdrop_node = root.select_one("section.title .backdrop, .SeriesSection .backdrop, .backdrop")
if backdrop_node is not None:
raw_style = (backdrop_node.get("style") or "").strip()
style_url = self._extract_style_url(raw_style)
if style_url:
candidate = _absolute_url(style_url)
if self._is_series_image_url(candidate):
fanart = candidate
if not fanart:
for selector in ("meta[property='og:image']",):
node = soup.select_one(selector)
if node is None:
continue
content = (node.get("content") or "").strip()
if content:
candidate = _absolute_url(content)
if self._is_series_image_url(candidate):
fanart = candidate
break
return plot, poster, fanart
@staticmethod
def _season_links_cache_name(series_url: str) -> str:
digest = hashlib.sha1((series_url or "").encode("utf-8")).hexdigest()[:20]
@@ -951,6 +1079,43 @@ class AniworldPlugin(BasisPlugin):
return None
def metadata_for(self, title: str) -> tuple[dict[str, str], dict[str, str], list[object] | None]:
title = (title or "").strip()
if not title:
return {}, {}, None
info: dict[str, str] = {"title": title}
art: dict[str, str] = {}
cached_plot, cached_poster = self._title_meta.get(title, ("", ""))
if cached_plot:
info["plot"] = cached_plot
if cached_poster:
art = {"thumb": cached_poster, "poster": cached_poster}
if "plot" in info and art:
return info, art, None
series = self._find_series_by_title(title)
if series is None or not series.url:
return info, art, None
if series.description and "plot" not in info:
info["plot"] = series.description
try:
soup = _get_soup(series.url, session=get_requests_session("aniworld", headers=HEADERS))
plot, poster, fanart = self._extract_series_metadata(soup)
except Exception:
plot, poster, fanart = "", "", ""
if plot:
info["plot"] = plot
if poster:
art = {"thumb": poster, "poster": poster}
if fanart:
art["fanart"] = fanart
art["landscape"] = fanart
self._store_title_meta(title, plot=info.get("plot", ""), poster=poster)
return info, art, None
def _ensure_popular(self) -> List[SeriesResult]:
if self._popular_cache is not None:
return list(self._popular_cache)


@@ -603,15 +603,6 @@ class EinschaltenPlugin(BasisPlugin):
url = urljoin(base + "/", path.lstrip("/"))
return f"{url}?{urlencode({'query': query})}"
-def _api_movies_url(self, *, with_genres: int, page: int = 1) -> str:
-base = self._get_base_url()
-if not base:
-return ""
-params: Dict[str, str] = {"withGenres": str(int(with_genres))}
-if page and int(page) > 1:
-params["page"] = str(int(page))
-return urljoin(base + "/", "api/movies") + f"?{urlencode(params)}"
def _genre_page_url(self, *, genre_id: int, page: int = 1) -> str:
"""Genre title pages are rendered server-side and embed the movie list in ng-state. """Genre title pages are rendered server-side and embed the movie list in ng-state.
@@ -771,23 +762,6 @@ class EinschaltenPlugin(BasisPlugin):
except Exception:
return []
-def _fetch_new_titles_movies(self) -> List[MovieItem]:
-# "Neue Filme" lives at `/movies/new` and embeds the list in ng-state (`u: "/api/movies"`).
-url = self._new_titles_url()
-if not url:
-return []
-try:
-_, body = self._http_get_text(url, timeout=20)
-payload = _extract_ng_state_payload(body)
-movies = _parse_ng_state_movies(payload)
-_log_debug_line(f"parse_ng_state_movies:count={len(movies)}")
-if movies:
-_log_titles(movies, context="new_titles")
-return movies
-return []
-except Exception:
-return []
def _fetch_new_titles_movies_page(self, page: int) -> List[MovieItem]:
page = max(1, int(page or 1))
url = self._new_titles_url()


@@ -244,6 +244,7 @@ class FilmpalastPlugin(BasisPlugin):
def __init__(self) -> None:
self._title_to_url: Dict[str, str] = {}
self._title_meta: Dict[str, tuple[str, str]] = {}
self._series_entries: Dict[str, Dict[int, Dict[int, EpisodeEntry]]] = {}
self._hoster_cache: Dict[str, Dict[str, str]] = {}
self._genre_to_url: Dict[str, str] = {}
@@ -722,6 +723,64 @@ class FilmpalastPlugin(BasisPlugin):
return hit.url
return ""
def _store_title_meta(self, title: str, *, plot: str = "", poster: str = "") -> None:
title = (title or "").strip()
if not title:
return
old_plot, old_poster = self._title_meta.get(title, ("", ""))
merged_plot = (plot or old_plot or "").strip()
merged_poster = (poster or old_poster or "").strip()
self._title_meta[title] = (merged_plot, merged_poster)
def _extract_detail_metadata(self, soup: BeautifulSoupT) -> tuple[str, str]:
if not soup:
return "", ""
root = soup.select_one("div#content[role='main']") or soup
detail = root.select_one("article.detail") or root
plot = ""
poster = ""
# Filmpalast detail page: prefer the dedicated plot ("Filmhandlung") block.
plot_node = detail.select_one(
"li[itemtype='http://schema.org/Movie'] span[itemprop='description']"
)
if plot_node is not None:
plot = (plot_node.get_text(" ", strip=True) or "").strip()
if not plot:
hidden_plot = detail.select_one("cite span.hidden")
if hidden_plot is not None:
plot = (hidden_plot.get_text(" ", strip=True) or "").strip()
if not plot:
for selector in ("meta[property='og:description']", "meta[name='description']"):
node = root.select_one(selector)
if node is None:
continue
content = (node.get("content") or "").strip()
if content:
plot = content
break
# Filmpalast detail page: the cover reliably sits in `img.cover2`.
cover = detail.select_one("img.cover2")
if cover is not None:
value = (cover.get("data-src") or cover.get("src") or "").strip()
if value:
candidate = _absolute_url(value)
lower = candidate.casefold()
if "/themes/" not in lower and "spacer.gif" not in lower and "/files/movies/" in lower:
poster = candidate
if not poster:
thumb_node = detail.select_one("li[itemtype='http://schema.org/Movie'] img[itemprop='image']")
if thumb_node is not None:
value = (thumb_node.get("data-src") or thumb_node.get("src") or "").strip()
if value:
candidate = _absolute_url(value)
lower = candidate.casefold()
if "/themes/" not in lower and "spacer.gif" not in lower and "/files/movies/" in lower:
poster = candidate
return plot, poster
def remember_series_url(self, title: str, series_url: str) -> None:
title = (title or "").strip()
series_url = (series_url or "").strip()
@@ -742,6 +801,52 @@
return _series_hint_value(series_key)
return ""
def metadata_for(self, title: str) -> tuple[dict[str, str], dict[str, str], list[object] | None]:
title = (title or "").strip()
if not title:
return {}, {}, None
info: dict[str, str] = {"title": title}
art: dict[str, str] = {}
cached_plot, cached_poster = self._title_meta.get(title, ("", ""))
if cached_plot:
info["plot"] = cached_plot
if cached_poster:
art = {"thumb": cached_poster, "poster": cached_poster}
if "plot" in info and art:
return info, art, None
detail_url = self._ensure_title_url(title)
if not detail_url:
series_key = self._series_key_for_title(title) or self._ensure_series_entries_for_title(title)
if series_key:
seasons = self._series_entries.get(series_key, {})
first_entry: Optional[EpisodeEntry] = None
for season_number in sorted(seasons.keys()):
episodes = seasons.get(season_number, {})
for episode_number in sorted(episodes.keys()):
first_entry = episodes.get(episode_number)
if first_entry is not None:
break
if first_entry is not None:
break
detail_url = first_entry.url if first_entry is not None else ""
if not detail_url:
return info, art, None
try:
soup = _get_soup(detail_url, session=get_requests_session("filmpalast", headers=HEADERS))
plot, poster = self._extract_detail_metadata(soup)
except Exception:
plot, poster = "", ""
if plot:
info["plot"] = plot
if poster:
art = {"thumb": poster, "poster": poster}
self._store_title_meta(title, plot=info.get("plot", ""), poster=poster)
return info, art, None
def is_movie(self, title: str) -> bool:
title = (title or "").strip()
if not title:


@@ -79,6 +79,7 @@ SESSION_CACHE_PREFIX = "viewit.serienstream"
SESSION_CACHE_MAX_TITLE_URLS = 800
CATALOG_SEARCH_TTL_SECONDS = 600
CATALOG_SEARCH_CACHE_KEY = "catalog_index"
GENRE_LIST_PAGE_SIZE = 20
_CATALOG_INDEX_MEMORY: tuple[float, List["SeriesResult"]] = (0.0, [])
ProgressCallback = Optional[Callable[[str, Optional[int]], Any]]
@@ -97,6 +98,7 @@ class SeriesResult:
title: str
description: str
url: str
cover: str = ""
@dataclass
@@ -669,8 +671,9 @@ def _load_catalog_index_from_cache() -> Optional[List[SeriesResult]]:
title = str(entry[0] or "").strip()
url = str(entry[1] or "").strip()
description = str(entry[2] or "") if len(entry) > 2 else ""
cover = str(entry[3] or "").strip() if len(entry) > 3 else ""
if title and url:
-items.append(SeriesResult(title=title, description=description, url=url))
+items.append(SeriesResult(title=title, description=description, url=url, cover=cover))
if items:
_CATALOG_INDEX_MEMORY = (time.time() + CATALOG_SEARCH_TTL_SECONDS, list(items))
return items or None
@@ -685,7 +688,7 @@ def _store_catalog_index_in_cache(items: List[SeriesResult]) -> None:
for entry in items:
if not entry.title or not entry.url:
continue
-payload.append([entry.title, entry.url, entry.description])
+payload.append([entry.title, entry.url, entry.description, entry.cover])
_session_cache_set(CATALOG_SEARCH_CACHE_KEY, payload, ttl_seconds=CATALOG_SEARCH_TTL_SECONDS)
@@ -1096,7 +1099,7 @@ class SerienstreamPlugin(BasisPlugin):
name = "Serienstream"
version = "1.0.0"
-POPULAR_GENRE_LABEL = "⭐ Beliebte Serien"
+POPULAR_GENRE_LABEL = "Haeufig gesehen"
def __init__(self) -> None:
self._series_results: Dict[str, SeriesResult] = {}
@@ -1107,8 +1110,8 @@ class SerienstreamPlugin(BasisPlugin):
self._episode_label_cache: Dict[Tuple[str, str], Dict[str, EpisodeInfo]] = {}
self._catalog_cache: Optional[Dict[str, List[SeriesResult]]] = None
self._genre_group_cache: Dict[str, Dict[str, List[str]]] = {}
-self._genre_page_titles_cache: Dict[Tuple[str, int], List[str]] = {}
+self._genre_page_entries_cache: Dict[Tuple[str, int], List[SeriesResult]] = {}
-self._genre_page_count_cache: Dict[str, int] = {}
+self._genre_page_has_more_cache: Dict[Tuple[str, int], bool] = {}
self._popular_cache: Optional[List[SeriesResult]] = None
self._requests_available = REQUESTS_AVAILABLE
self._default_preferred_hosters: List[str] = list(DEFAULT_PREFERRED_HOSTERS)
@@ -1117,6 +1120,7 @@ class SerienstreamPlugin(BasisPlugin):
self._latest_cache: Dict[int, List[LatestEpisode]] = {}
self._latest_hoster_cache: Dict[str, List[str]] = {}
self._series_metadata_cache: Dict[str, Tuple[Dict[str, str], Dict[str, str]]] = {}
self._series_metadata_full: set[str] = set()
self.is_available = True
self.unavailable_reason: Optional[str] = None
if not self._requests_available: # pragma: no cover - optional dependency
@@ -1409,49 +1413,165 @@ class SerienstreamPlugin(BasisPlugin):
value = re.sub(r"[^a-z0-9]+", "-", value).strip("-")
return value
-def _fetch_genre_page_titles(self, genre: str, page: int) -> Tuple[List[str], int]:
+def _cache_list_metadata(self, title: str, description: str = "", cover: str = "") -> None:
key = self._metadata_cache_key(title)
cached = self._series_metadata_cache.get(key)
info = dict(cached[0]) if cached else {}
art = dict(cached[1]) if cached else {}
info.setdefault("title", title)
description = (description or "").strip()
if description and not info.get("plot"):
info["plot"] = description
cover = _absolute_url((cover or "").strip()) if cover else ""
if cover:
art.setdefault("thumb", cover)
art.setdefault("poster", cover)
self._series_metadata_cache[key] = (info, art)
@staticmethod
def _card_description(anchor: BeautifulSoupT) -> str:
if not anchor:
return ""
candidates: List[str] = []
direct = (anchor.get("data-search") or "").strip()
if direct:
candidates.append(direct)
title_attr = (anchor.get("data-title") or "").strip()
if title_attr:
candidates.append(title_attr)
for selector in ("p", ".description", ".desc", ".text-muted", ".small", ".overview"):
node = anchor.select_one(selector)
if node is None:
continue
text = (node.get_text(" ", strip=True) or "").strip()
if text:
candidates.append(text)
parent = anchor.parent if anchor else None
if parent is not None:
parent_data = (parent.get("data-search") or "").strip()
if parent_data:
candidates.append(parent_data)
parent_text = ""
try:
parent_text = (parent.get_text(" ", strip=True) or "").strip()
except Exception:
parent_text = ""
if parent_text and len(parent_text) > 24:
candidates.append(parent_text)
for value in candidates:
cleaned = re.sub(r"\s+", " ", str(value or "")).strip()
if cleaned and len(cleaned) > 12:
return cleaned
return ""
def _parse_genre_entries_from_soup(self, soup: BeautifulSoupT) -> List[SeriesResult]:
entries: List[SeriesResult] = []
seen_urls: set[str] = set()
def _add_entry(title: str, description: str, href: str, cover: str) -> None:
series_url = _absolute_url(href).split("#", 1)[0].split("?", 1)[0].rstrip("/")
if not series_url or "/serie/" not in series_url:
return
if "/staffel-" in series_url or "/episode-" in series_url:
return
if series_url in seen_urls:
return
title = (title or "").strip()
if not title:
return
description = (description or "").strip()
cover_url = _absolute_url((cover or "").strip()) if cover else ""
seen_urls.add(series_url)
self._remember_series_result(title, series_url, description)
self._cache_list_metadata(title, description=description, cover=cover_url)
entries.append(SeriesResult(title=title, description=description, url=series_url, cover=cover_url))
for anchor in soup.select("a.show-card[href]"):
href = (anchor.get("href") or "").strip()
if not href:
continue
img = anchor.select_one("img")
title = (
(img.get("alt") if img else "")
or (anchor.get("title") or "")
or (anchor.get_text(" ", strip=True) or "")
).strip()
description = self._card_description(anchor)
cover = (img.get("data-src") if img else "") or (img.get("src") if img else "")
_add_entry(title, description, href, cover)
if entries:
return entries
for item in soup.select("li.series-item"):
anchor = item.find("a", href=True)
if not anchor:
continue
href = (anchor.get("href") or "").strip()
title = (anchor.get_text(" ", strip=True) or "").strip()
description = (item.get("data-search") or "").strip()
img = anchor.find("img")
cover = (img.get("data-src") if img else "") or (img.get("src") if img else "")
_add_entry(title, description, href, cover)
return entries
+def _fetch_genre_page_entries(self, genre: str, page: int) -> Tuple[List[SeriesResult], bool]:
slug = self._genre_slug(genre)
if not slug:
-return [], 1
+return [], False
cache_key = (slug, page)
-cached = self._genre_page_titles_cache.get(cache_key)
-cached_pages = self._genre_page_count_cache.get(slug)
-if cached is not None and cached_pages is not None:
-return list(cached), int(cached_pages)
+cached_entries = self._genre_page_entries_cache.get(cache_key)
+cached_has_more = self._genre_page_has_more_cache.get(cache_key)
+if cached_entries is not None and cached_has_more is not None:
+return list(cached_entries), bool(cached_has_more)
url = f"{_get_base_url()}/genre/{slug}"
if page > 1:
url = f"{url}?page={int(page)}"
soup = _get_soup_simple(url)
-titles: List[str] = []
-seen: set[str] = set()
-for anchor in soup.select("a.show-card[href]"):
+entries = self._parse_genre_entries_from_soup(soup)
+has_more = False
+for anchor in soup.select("a[rel='next'][href], a[href*='?page=']"):
href = (anchor.get("href") or "").strip()
-series_url = _absolute_url(href).split("#", 1)[0].split("?", 1)[0].rstrip("/")
-if "/serie/" not in series_url:
+if not href:
continue
-img = anchor.select_one("img[alt]")
-title = ((img.get("alt") if img else "") or "").strip()
-if not title:
-continue
-key = title.casefold()
-if key in seen:
-continue
-seen.add(key)
-self._remember_series_result(title, series_url)
-titles.append(title)
-max_page = 1
-for anchor in soup.select("a[href*='?page=']"):
-href = (anchor.get("href") or "").strip()
match = re.search(r"[?&]page=(\d+)", href)
if not match:
+if "next" in href.casefold():
+has_more = True
continue
try:
-max_page = max(max_page, int(match.group(1)))
+if int(match.group(1)) > int(page):
+has_more = True
+break
except Exception:
continue
-self._genre_page_titles_cache[cache_key] = list(titles)
-self._genre_page_count_cache[slug] = max_page
-return list(titles), max_page
+if len(entries) > GENRE_LIST_PAGE_SIZE:
+has_more = True
+entries = entries[:GENRE_LIST_PAGE_SIZE]
+self._genre_page_entries_cache[cache_key] = list(entries)
+self._genre_page_has_more_cache[cache_key] = bool(has_more)
+return list(entries), bool(has_more)
def titles_for_genre_page(self, genre: str, page: int) -> List[str]:
genre = (genre or "").strip()
page = max(1, int(page or 1))
entries, _ = self._fetch_genre_page_entries(genre, page)
return [entry.title for entry in entries if entry.title]
def genre_has_more(self, genre: str, page: int) -> bool:
genre = (genre or "").strip()
page = max(1, int(page or 1))
slug = self._genre_slug(genre)
if not slug:
return False
cache_key = (slug, page)
cached = self._genre_page_has_more_cache.get(cache_key)
if cached is not None:
return bool(cached)
_, has_more = self._fetch_genre_page_entries(genre, page)
return bool(has_more)
def titles_for_genre_group_page(self, genre: str, group_code: str, page: int = 1, page_size: int = 10) -> List[str]:
genre = (genre or "").strip()
@@ -1461,14 +1581,17 @@ class SerienstreamPlugin(BasisPlugin):
needed = page * page_size + 1
matched: List[str] = []
try:
-_, max_pages = self._fetch_genre_page_titles(genre, 1)
-for page_index in range(1, max_pages + 1):
-page_titles, _ = self._fetch_genre_page_titles(genre, page_index)
-for title in page_titles:
+page_index = 1
+has_more = True
+while has_more:
+page_entries, has_more = self._fetch_genre_page_entries(genre, page_index)
+for entry in page_entries:
+title = entry.title
if self._group_matches(group_code, title):
matched.append(title)
if len(matched) >= needed:
break
+page_index += 1
start = (page - 1) * page_size
end = start + page_size
return list(matched[start:end])
@@ -1487,14 +1610,17 @@ class SerienstreamPlugin(BasisPlugin):
         needed = page * page_size + 1
         count = 0
         try:
-            _, max_pages = self._fetch_genre_page_titles(genre, 1)
-            for page_index in range(1, max_pages + 1):
-                page_titles, _ = self._fetch_genre_page_titles(genre, page_index)
-                for title in page_titles:
+            page_index = 1
+            has_more = True
+            while has_more:
+                page_entries, has_more = self._fetch_genre_page_entries(genre, page_index)
+                for entry in page_entries:
+                    title = entry.title
                     if self._group_matches(group_code, title):
                         count += 1
                     if count >= needed:
                         return True
+                page_index += 1
             return False
         except Exception:
             grouped = self._ensure_genre_group_cache(genre)
@@ -1611,6 +1737,7 @@ class SerienstreamPlugin(BasisPlugin):
         cache_key = self._metadata_cache_key(title)
         if info_labels or art:
             self._series_metadata_cache[cache_key] = (info_labels, art)
+            self._series_metadata_full.add(cache_key)
 
         base_series_url = _series_root_url(_extract_canonical_url(series_soup, series.url))
         season_links = _extract_season_links(series_soup)
@@ -1646,7 +1773,7 @@ class SerienstreamPlugin(BasisPlugin):
         cache_key = self._metadata_cache_key(title)
         cached = self._series_metadata_cache.get(cache_key)
-        if cached is not None:
+        if cached is not None and cache_key in self._series_metadata_full:
             info, art = cached
             return dict(info), dict(art), None
@@ -1656,11 +1783,14 @@ class SerienstreamPlugin(BasisPlugin):
             self._series_metadata_cache[cache_key] = (dict(info), {})
             return info, {}, None
 
-        info: Dict[str, str] = {"title": title}
-        art: Dict[str, str] = {}
+        info: Dict[str, str] = dict(cached[0]) if cached else {"title": title}
+        art: Dict[str, str] = dict(cached[1]) if cached else {}
+        info.setdefault("title", title)
         if series.description:
-            info["plot"] = series.description
+            info.setdefault("plot", series.description)
+        # Fuer Listenansichten laden wir pro Seite die Detail-Metadaten vollstaendig nach.
+        loaded_full = False
         try:
             soup = _get_soup(series.url, session=get_requests_session("serienstream", headers=HEADERS))
             parsed_info, parsed_art = _extract_series_metadata(soup)
@@ -1668,10 +1798,13 @@ class SerienstreamPlugin(BasisPlugin):
                 info.update(parsed_info)
             if parsed_art:
                 art.update(parsed_art)
+            loaded_full = True
         except Exception:
             pass
 
         self._series_metadata_cache[cache_key] = (dict(info), dict(art))
+        if loaded_full:
+            self._series_metadata_full.add(cache_key)
         return info, art, None
 
     def series_url_for_title(self, title: str) -> str:
@@ -1742,6 +1875,8 @@ class SerienstreamPlugin(BasisPlugin):
             self._season_links_cache.clear()
             self._episode_label_cache.clear()
             self._catalog_cache = None
+            self._series_metadata_cache.clear()
+            self._series_metadata_full.clear()
             return []
         if not self._requests_available:
             raise RuntimeError("SerienstreamPlugin kann ohne requests/bs4 nicht suchen.")
@@ -1755,6 +1890,8 @@ class SerienstreamPlugin(BasisPlugin):
             self._season_cache.clear()
             self._episode_label_cache.clear()
             self._catalog_cache = None
+            self._series_metadata_cache.clear()
+            self._series_metadata_full.clear()
             raise RuntimeError(f"Serienstream-Suche fehlgeschlagen: {exc}") from exc
         self._series_results = {}
         for result in results:


@@ -66,12 +66,9 @@ SETTING_LOG_URLS = "log_urls_topstreamfilm"
 SETTING_DUMP_HTML = "dump_html_topstreamfilm"
 SETTING_SHOW_URL_INFO = "show_url_info_topstreamfilm"
 SETTING_LOG_ERRORS = "log_errors_topstreamfilm"
-SETTING_GENRE_MAX_PAGES = "topstream_genre_max_pages"
 
 DEFAULT_TIMEOUT = 20
 DEFAULT_PREFERRED_HOSTERS = ["supervideo", "dropload", "voe"]
 MEINECLOUD_HOST = "meinecloud.click"
-DEFAULT_GENRE_MAX_PAGES = 20
-HARD_MAX_GENRE_PAGES = 200
 
 HEADERS = {
     "User-Agent": "Mozilla/5.0 (Kodi; ViewIt) AppleWebKit/537.36 (KHTML, like Gecko)",
     "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
@@ -97,6 +94,7 @@ class SearchHit:
     title: str
     url: str
     description: str = ""
+    poster: str = ""
 
 
 def _normalize_search_text(value: str) -> str:
@@ -149,6 +147,7 @@ class TopstreamfilmPlugin(BasisPlugin):
         self._season_to_episode_numbers: Dict[tuple[str, str], List[int]] = {}
         self._episode_title_by_number: Dict[tuple[str, int, int], str] = {}
         self._detail_html_cache: Dict[str, str] = {}
+        self._title_meta: Dict[str, tuple[str, str]] = {}
         self._popular_cache: List[str] | None = None
         self._default_preferred_hosters: List[str] = list(DEFAULT_PREFERRED_HOSTERS)
         self._preferred_hosters: List[str] = list(self._default_preferred_hosters)
@@ -345,22 +344,6 @@ class TopstreamfilmPlugin(BasisPlugin):
             return urljoin(base if base.endswith("/") else base + "/", href)
         return href
 
-    def _get_setting_bool(self, setting_id: str, *, default: bool = False) -> bool:
-        return get_setting_bool(ADDON_ID, setting_id, default=default)
-
-    def _get_setting_int(self, setting_id: str, *, default: int) -> int:
-        if xbmcaddon is None:
-            return default
-        try:
-            addon = xbmcaddon.Addon(ADDON_ID)
-            getter = getattr(addon, "getSettingInt", None)
-            if callable(getter):
-                return int(getter(setting_id))
-            raw = str(addon.getSetting(setting_id) or "").strip()
-            return int(raw) if raw else default
-        except Exception:
-            return default
-
     def _notify_url(self, url: str) -> None:
         notify_url(
             ADDON_ID,
@@ -429,6 +412,7 @@ class TopstreamfilmPlugin(BasisPlugin):
                 continue
             seen.add(hit.title)
             self._title_to_url[hit.title] = hit.url
+            self._store_title_meta(hit.title, plot=hit.description, poster=hit.poster)
             titles.append(hit.title)
         if titles:
             self._save_title_url_cache()
@@ -487,6 +471,69 @@ class TopstreamfilmPlugin(BasisPlugin):
         except Exception:
             return ""
 
+    def _pick_image_from_node(self, node: Any) -> str:
+        if node is None:
+            return ""
+        image = node.select_one("img")
+        if image is None:
+            return ""
+        for attr in ("data-src", "src"):
+            value = (image.get(attr) or "").strip()
+            if value and "lazy_placeholder" not in value.casefold():
+                return self._absolute_external_url(value, base=self._get_base_url())
+        srcset = (image.get("data-srcset") or image.get("srcset") or "").strip()
+        if srcset:
+            first = srcset.split(",")[0].strip().split(" ", 1)[0].strip()
+            if first:
+                return self._absolute_external_url(first, base=self._get_base_url())
+        return ""
+
+    def _store_title_meta(self, title: str, *, plot: str = "", poster: str = "") -> None:
+        title = (title or "").strip()
+        if not title:
+            return
+        old_plot, old_poster = self._title_meta.get(title, ("", ""))
+        merged_plot = (plot or old_plot or "").strip()
+        merged_poster = (poster or old_poster or "").strip()
+        self._title_meta[title] = (merged_plot, merged_poster)
+
+    def _extract_detail_metadata(self, soup: BeautifulSoupT) -> tuple[str, str]:
+        if not soup:
+            return "", ""
+        plot = ""
+        poster = ""
+        for selector in ("meta[property='og:description']", "meta[name='description']"):
+            node = soup.select_one(selector)
+            if node is None:
+                continue
+            content = (node.get("content") or "").strip()
+            if content:
+                plot = content
+                break
+        if not plot:
+            candidates: list[str] = []
+            for paragraph in soup.select("article p, .TPost p, .Description p, .entry-content p"):
+                text = (paragraph.get_text(" ", strip=True) or "").strip()
+                if len(text) >= 60:
+                    candidates.append(text)
+            if candidates:
+                plot = max(candidates, key=len)
+        for selector in ("meta[property='og:image']", "meta[name='twitter:image']"):
+            node = soup.select_one(selector)
+            if node is None:
+                continue
+            content = (node.get("content") or "").strip()
+            if content:
+                poster = self._absolute_external_url(content, base=self._get_base_url())
+                break
+        if not poster:
+            for selector in ("article", ".TPost", ".entry-content"):
+                poster = self._pick_image_from_node(soup.select_one(selector))
+                if poster:
+                    break
+        return plot, poster
+
     def _clear_stream_index_for_title(self, title: str) -> None:
         for key in list(self._season_to_episode_numbers.keys()):
             if key[0] == title:
@@ -721,7 +768,17 @@ class TopstreamfilmPlugin(BasisPlugin):
                 continue
             if is_movie_hint:
                 self._movie_title_hint.add(title)
-            hits.append(SearchHit(title=title, url=self._absolute_url(href), description=""))
+            description_tag = item.select_one(".TPMvCn .Description, .Description, .entry-summary")
+            description = (description_tag.get_text(" ", strip=True) or "").strip() if description_tag else ""
+            poster = self._pick_image_from_node(item)
+            hits.append(
+                SearchHit(
+                    title=title,
+                    url=self._absolute_url(href),
+                    description=description,
+                    poster=poster,
+                )
+            )
         return hits
 
     def is_movie(self, title: str) -> bool:
@@ -794,6 +851,7 @@ class TopstreamfilmPlugin(BasisPlugin):
                 continue
             seen.add(hit.title)
             self._title_to_url[hit.title] = hit.url
+            self._store_title_meta(hit.title, plot=hit.description, poster=hit.poster)
             titles.append(hit.title)
         if titles:
             self._save_title_url_cache()
@@ -905,7 +963,8 @@ class TopstreamfilmPlugin(BasisPlugin):
                 self._movie_title_hint.add(title)
             description_tag = item.select_one(".TPMvCn .Description")
             description = description_tag.get_text(" ", strip=True) if description_tag else ""
-            hit = SearchHit(title=title, url=self._absolute_url(href), description=description)
+            poster = self._pick_image_from_node(item)
+            hit = SearchHit(title=title, url=self._absolute_url(href), description=description, poster=poster)
             if _matches_query(query, title=hit.title, description=hit.description):
                 hits.append(hit)
@@ -918,11 +977,41 @@ class TopstreamfilmPlugin(BasisPlugin):
                 continue
             seen.add(hit.title)
             self._title_to_url[hit.title] = hit.url
+            self._store_title_meta(hit.title, plot=hit.description, poster=hit.poster)
             titles.append(hit.title)
         self._save_title_url_cache()
         _emit_progress(progress_callback, f"Fertig: {len(titles)} Treffer", 95)
         return titles
 
+    def metadata_for(self, title: str) -> tuple[dict[str, str], dict[str, str], list[object] | None]:
+        title = (title or "").strip()
+        if not title:
+            return {}, {}, None
+        info: dict[str, str] = {"title": title}
+        art: dict[str, str] = {}
+        cached_plot, cached_poster = self._title_meta.get(title, ("", ""))
+        if cached_plot:
+            info["plot"] = cached_plot
+        if cached_poster:
+            art = {"thumb": cached_poster, "poster": cached_poster}
+        if "plot" in info and art:
+            return info, art, None
+        soup = self._get_detail_soup(title)
+        if soup is None:
+            return info, art, None
+        plot, poster = self._extract_detail_metadata(soup)
+        if plot:
+            info["plot"] = plot
+        if poster:
+            art = {"thumb": poster, "poster": poster}
+        self._store_title_meta(title, plot=plot, poster=poster)
+        return info, art, None
+
     def genres(self) -> List[str]:
         if not REQUESTS_AVAILABLE or BeautifulSoup is None:
             return []


@@ -8,8 +8,16 @@ from __future__ import annotations
 from typing import Optional
 
+_LAST_RESOLVE_ERROR = ""
+
+
+def get_last_error() -> str:
+    return str(_LAST_RESOLVE_ERROR or "")
+
+
 def resolve(url: str) -> Optional[str]:
+    global _LAST_RESOLVE_ERROR
+    _LAST_RESOLVE_ERROR = ""
     if not url:
         return None
     try:
@@ -23,12 +31,14 @@ def resolve(url: str) -> Optional[str]:
         hmf = hosted(url)
         valid = getattr(hmf, "valid_url", None)
         if callable(valid) and not valid():
+            _LAST_RESOLVE_ERROR = "invalid url"
             return None
         resolver = getattr(hmf, "resolve", None)
         if callable(resolver):
             result = resolver()
             return str(result) if result else None
-    except Exception:
+    except Exception as exc:
+        _LAST_RESOLVE_ERROR = str(exc or "")
         pass
 
     try:
@@ -36,8 +46,8 @@ def resolve(url: str) -> Optional[str]:
         if callable(resolve_fn):
             result = resolve_fn(url)
             return str(result) if result else None
-    except Exception:
+    except Exception as exc:
+        _LAST_RESOLVE_ERROR = str(exc or "")
         return None
     return None
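The resolver diff adds a module-level `_LAST_RESOLVE_ERROR` so callers can show a reason after `resolve()` returns `None`. A minimal standalone sketch of the same last-error pattern (the real resolver backends are replaced with a raising stand-in, so this is illustrative, not the addon's module):

```python
from typing import Optional

_LAST_RESOLVE_ERROR = ""


def get_last_error() -> str:
    return str(_LAST_RESOLVE_ERROR or "")


def resolve(url: str) -> Optional[str]:
    """Reset the error on every call, record any failure before returning None."""
    global _LAST_RESOLVE_ERROR
    _LAST_RESOLVE_ERROR = ""
    if not url:
        return None
    try:
        # Stand-in for a failing backend such as a hoster resolver.
        raise ValueError("no resolver backend available")
    except Exception as exc:
        _LAST_RESOLVE_ERROR = str(exc or "")
    return None


assert resolve("https://example.invalid/video") is None
print(get_last_error())
```

Resetting the variable at the top of `resolve()` is what keeps a stale message from a previous call from leaking into the next one.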


@@ -1,6 +1,66 @@
 <?xml version="1.0" encoding="UTF-8"?>
 <settings>
-    <category label="Debug und Logs">
+    <category label="Quellen">
+        <setting id="serienstream_base_url" type="text" label="SerienStream Basis-URL" default="https://s.to" />
+        <setting id="aniworld_base_url" type="text" label="AniWorld Basis-URL" default="https://aniworld.to" />
+        <setting id="topstream_base_url" type="text" label="TopStream Basis-URL" default="https://topstreamfilm.live" />
+        <setting id="einschalten_base_url" type="text" label="Einschalten Basis-URL" default="https://einschalten.in" />
+        <setting id="filmpalast_base_url" type="text" label="Filmpalast Basis-URL" default="https://filmpalast.to" />
+        <setting id="doku_streams_base_url" type="text" label="Doku-Streams Basis-URL" default="https://doku-streams.com" />
+    </category>
+
+    <category label="Metadaten">
+        <setting id="serienstream_metadata_source" type="enum" label="SerienStream Metadatenquelle" default="0" values="Automatisch|Quelle|TMDB|Mischen" />
+        <setting id="aniworld_metadata_source" type="enum" label="AniWorld Metadatenquelle" default="0" values="Automatisch|Quelle|TMDB|Mischen" />
+        <setting id="topstreamfilm_metadata_source" type="enum" label="TopStream Metadatenquelle" default="0" values="Automatisch|Quelle|TMDB|Mischen" />
+        <setting id="einschalten_metadata_source" type="enum" label="Einschalten Metadatenquelle" default="0" values="Automatisch|Quelle|TMDB|Mischen" />
+        <setting id="filmpalast_metadata_source" type="enum" label="Filmpalast Metadatenquelle" default="0" values="Automatisch|Quelle|TMDB|Mischen" />
+        <setting id="doku_streams_metadata_source" type="enum" label="Doku-Streams Metadatenquelle" default="0" values="Automatisch|Quelle|TMDB|Mischen" />
+        <setting id="tmdb_enabled" type="bool" label="TMDB aktivieren" default="true" />
+        <setting id="tmdb_language" type="text" label="TMDB Sprache (z. B. de-DE)" default="de-DE" />
+        <setting id="tmdb_show_plot" type="bool" label="TMDB Beschreibung anzeigen" default="true" />
+        <setting id="tmdb_show_art" type="bool" label="TMDB Poster und Vorschaubild anzeigen" default="true" />
+        <setting id="tmdb_show_fanart" type="bool" label="TMDB Fanart/Backdrop anzeigen" default="true" />
+        <setting id="tmdb_show_rating" type="bool" label="TMDB Bewertung anzeigen" default="true" />
+        <setting id="tmdb_show_votes" type="bool" label="TMDB Stimmen anzeigen" default="false" />
+    </category>
+
+    <category label="TMDB Erweitert">
+        <setting id="tmdb_api_key" type="text" label="TMDB API Key" default="" />
+        <setting id="tmdb_prefetch_concurrency" type="number" label="TMDB: gleichzeitige Anfragen (1-20)" default="6" />
+        <setting id="tmdb_show_cast" type="bool" label="TMDB Besetzung anzeigen" default="false" />
+        <setting id="tmdb_show_episode_cast" type="bool" label="TMDB Besetzung pro Episode anzeigen" default="false" />
+        <setting id="tmdb_genre_metadata" type="bool" label="TMDB Daten in Genre-Listen anzeigen" default="false" />
+        <setting id="tmdb_log_requests" type="bool" label="TMDB API-Anfragen loggen" default="false" />
+        <setting id="tmdb_log_responses" type="bool" label="TMDB API-Antworten loggen" default="false" />
+    </category>
+
+    <category label="Updates">
+        <setting id="update_channel" type="enum" label="Update-Kanal" default="1" values="Main|Nightly|Custom" />
+        <setting id="apply_update_channel" type="action" label="Update-Kanal jetzt anwenden" action="RunPlugin(plugin://plugin.video.viewit/?action=apply_update_channel)" option="close" />
+        <setting id="auto_update_enabled" type="bool" label="Automatische Updates (beim Start pruefen)" default="false" />
+        <setting id="select_update_version" type="action" label="Version waehlen und installieren" action="RunPlugin(plugin://plugin.video.viewit/?action=select_update_version)" option="close" />
+        <setting id="update_installed_version" type="text" label="Installierte Version" default="-" enable="false" />
+        <setting id="update_available_selected" type="text" label="Verfuegbar (gewaehlter Kanal)" default="-" enable="false" />
+        <setting id="update_available_main" type="text" label="Verfuegbar Main" default="-" enable="false" />
+        <setting id="update_available_nightly" type="text" label="Verfuegbar Nightly" default="-" enable="false" />
+        <setting id="update_active_channel" type="text" label="Aktiver Kanal" default="-" enable="false" />
+        <setting id="update_active_repo_url" type="text" label="Aktive Repo URL" default="-" enable="false" />
+        <setting id="update_info" type="text" label="Updates laufen ueber den normalen Kodi-Update-Mechanismus." default="" enable="false" />
+        <setting id="update_repo_url_main" type="text" label="Main URL (addons.xml)" default="https://gitea.it-drui.de/viewit/ViewIT-Kodi-Repo/raw/branch/main/addons.xml" />
+        <setting id="update_repo_url_nightly" type="text" label="Nightly URL (addons.xml)" default="https://gitea.it-drui.de/viewit/ViewIT-Kodi-Repo/raw/branch/nightly/addons.xml" />
+        <setting id="update_repo_url" type="text" label="Custom URL (addons.xml)" default="https://gitea.it-drui.de/viewit/ViewIT-Kodi-Repo/raw/branch/nightly/addons.xml" />
+        <setting id="auto_update_last_ts" type="text" label="Auto-Update letzte Pruefung (intern)" default="0" visible="false" />
+        <setting id="update_version_addon" type="text" label="ViewIT Version" default="-" visible="false" />
+        <setting id="update_version_serienstream" type="text" label="SerienStream Version" default="-" visible="false" />
+        <setting id="update_version_aniworld" type="text" label="AniWorld Version" default="-" visible="false" />
+        <setting id="update_version_einschalten" type="text" label="Einschalten Version" default="-" visible="false" />
+        <setting id="update_version_topstreamfilm" type="text" label="TopStream Version" default="-" visible="false" />
+        <setting id="update_version_filmpalast" type="text" label="Filmpalast Version" default="-" visible="false" />
+        <setting id="update_version_doku_streams" type="text" label="Doku-Streams Version" default="-" visible="false" />
+    </category>
+
+    <category label="Debug Global">
         <setting id="debug_log_urls" type="bool" label="URLs mitschreiben (global)" default="false" />
         <setting id="debug_dump_html" type="bool" label="HTML speichern (global)" default="false" />
         <setting id="debug_show_url_info" type="bool" label="Aktuelle URL anzeigen (global)" default="false" />
@@ -8,78 +68,32 @@
         <setting id="log_max_mb" type="number" label="URL-Log: maximale Dateigroesse (MB)" default="5" />
         <setting id="log_max_files" type="number" label="URL-Log: Anzahl alter Dateien" default="3" />
         <setting id="dump_max_files" type="number" label="HTML: maximale Dateien pro Plugin" default="200" />
-        <setting id="log_urls_serienstream" type="bool" label="Serienstream: URLs mitschreiben" default="false" />
-        <setting id="dump_html_serienstream" type="bool" label="Serienstream: HTML speichern" default="false" />
-        <setting id="show_url_info_serienstream" type="bool" label="Serienstream: Aktuelle URL anzeigen" default="false" />
-        <setting id="log_errors_serienstream" type="bool" label="Serienstream: Fehler mitschreiben" default="false" />
-        <setting id="log_urls_aniworld" type="bool" label="Aniworld: URLs mitschreiben" default="false" />
-        <setting id="dump_html_aniworld" type="bool" label="Aniworld: HTML speichern" default="false" />
-        <setting id="show_url_info_aniworld" type="bool" label="Aniworld: Aktuelle URL anzeigen" default="false" />
-        <setting id="log_errors_aniworld" type="bool" label="Aniworld: Fehler mitschreiben" default="false" />
-        <setting id="log_urls_topstreamfilm" type="bool" label="Topstreamfilm: URLs mitschreiben" default="false" />
-        <setting id="dump_html_topstreamfilm" type="bool" label="Topstreamfilm: HTML speichern" default="false" />
-        <setting id="show_url_info_topstreamfilm" type="bool" label="Topstreamfilm: Aktuelle URL anzeigen" default="false" />
-        <setting id="log_errors_topstreamfilm" type="bool" label="Topstreamfilm: Fehler mitschreiben" default="false" />
+    </category>
+
+    <category label="Debug Quellen">
+        <setting id="log_urls_serienstream" type="bool" label="SerienStream: URLs mitschreiben" default="false" />
+        <setting id="dump_html_serienstream" type="bool" label="SerienStream: HTML speichern" default="false" />
+        <setting id="show_url_info_serienstream" type="bool" label="SerienStream: Aktuelle URL anzeigen" default="false" />
+        <setting id="log_errors_serienstream" type="bool" label="SerienStream: Fehler mitschreiben" default="false" />
+        <setting id="log_urls_aniworld" type="bool" label="AniWorld: URLs mitschreiben" default="false" />
+        <setting id="dump_html_aniworld" type="bool" label="AniWorld: HTML speichern" default="false" />
+        <setting id="show_url_info_aniworld" type="bool" label="AniWorld: Aktuelle URL anzeigen" default="false" />
+        <setting id="log_errors_aniworld" type="bool" label="AniWorld: Fehler mitschreiben" default="false" />
+        <setting id="log_urls_topstreamfilm" type="bool" label="TopStream: URLs mitschreiben" default="false" />
+        <setting id="dump_html_topstreamfilm" type="bool" label="TopStream: HTML speichern" default="false" />
+        <setting id="show_url_info_topstreamfilm" type="bool" label="TopStream: Aktuelle URL anzeigen" default="false" />
+        <setting id="log_errors_topstreamfilm" type="bool" label="TopStream: Fehler mitschreiben" default="false" />
         <setting id="log_urls_einschalten" type="bool" label="Einschalten: URLs mitschreiben" default="false" />
         <setting id="dump_html_einschalten" type="bool" label="Einschalten: HTML speichern" default="false" />
         <setting id="show_url_info_einschalten" type="bool" label="Einschalten: Aktuelle URL anzeigen" default="false" />
         <setting id="log_errors_einschalten" type="bool" label="Einschalten: Fehler mitschreiben" default="false" />
         <setting id="log_urls_filmpalast" type="bool" label="Filmpalast: URLs mitschreiben" default="false" />
         <setting id="dump_html_filmpalast" type="bool" label="Filmpalast: HTML speichern" default="false" />
         <setting id="show_url_info_filmpalast" type="bool" label="Filmpalast: Aktuelle URL anzeigen" default="false" />
         <setting id="log_errors_filmpalast" type="bool" label="Filmpalast: Fehler mitschreiben" default="false" />
     </category>
-
-    <category label="TopStream">
-        <setting id="topstream_base_url" type="text" label="Basis-URL" default="https://topstreamfilm.live" />
-        <setting id="topstreamfilm_metadata_source" type="enum" label="Metadatenquelle" default="0" values="Automatisch|Quelle|TMDB|Mischen" />
-        <setting id="topstream_genre_max_pages" type="number" label="Genres: max. Seiten laden" default="20" />
-    </category>
-
-    <category label="SerienStream">
-        <setting id="serienstream_base_url" type="text" label="Basis-URL" default="https://s.to" />
-        <setting id="serienstream_metadata_source" type="enum" label="Metadatenquelle" default="0" values="Automatisch|Quelle|TMDB|Mischen" />
-    </category>
-
-    <category label="AniWorld">
-        <setting id="aniworld_base_url" type="text" label="Basis-URL" default="https://aniworld.to" />
-        <setting id="aniworld_metadata_source" type="enum" label="Metadatenquelle" default="0" values="Automatisch|Quelle|TMDB|Mischen" />
-    </category>
-
-    <category label="Einschalten">
-        <setting id="einschalten_base_url" type="text" label="Basis-URL" default="https://einschalten.in" />
-        <setting id="einschalten_metadata_source" type="enum" label="Metadatenquelle" default="0" values="Automatisch|Quelle|TMDB|Mischen" />
-    </category>
-
-    <category label="Filmpalast">
-        <setting id="filmpalast_base_url" type="text" label="Basis-URL" default="https://filmpalast.to" />
-        <setting id="filmpalast_metadata_source" type="enum" label="Metadatenquelle" default="0" values="Automatisch|Quelle|TMDB|Mischen" />
-    </category>
-
-    <category label="Doku-Streams">
-        <setting id="doku_streams_base_url" type="text" label="Basis-URL" default="https://doku-streams.com" />
-        <setting id="doku_streams_metadata_source" type="enum" label="Metadatenquelle" default="0" values="Automatisch|Quelle|TMDB|Mischen" />
-    </category>
-
-    <category label="TMDB">
-        <setting id="tmdb_enabled" type="bool" label="TMDB aktivieren" default="true" />
-        <setting id="tmdb_api_key" type="text" label="TMDB API Key" default="" />
-        <setting id="tmdb_language" type="text" label="TMDB Sprache (z. B. de-DE)" default="de-DE" />
-        <setting id="tmdb_prefetch_concurrency" type="number" label="TMDB: gleichzeitige Anfragen (1-20)" default="6" />
-        <setting id="tmdb_show_plot" type="bool" label="TMDB Beschreibung anzeigen" default="true" />
-        <setting id="tmdb_show_art" type="bool" label="TMDB Poster und Vorschaubild anzeigen" default="true" />
-        <setting id="tmdb_show_fanart" type="bool" label="TMDB Fanart/Backdrop anzeigen" default="true" />
-        <setting id="tmdb_show_rating" type="bool" label="TMDB Bewertung anzeigen" default="true" />
-        <setting id="tmdb_show_votes" type="bool" label="TMDB Stimmen anzeigen" default="false" />
-        <setting id="tmdb_show_cast" type="bool" label="TMDB Besetzung anzeigen" default="false" />
-        <setting id="tmdb_show_episode_cast" type="bool" label="TMDB Besetzung pro Episode anzeigen" default="false" />
-        <setting id="tmdb_genre_metadata" type="bool" label="TMDB Daten in Genre-Listen anzeigen" default="false" />
-        <setting id="tmdb_log_requests" type="bool" label="TMDB API-Anfragen loggen" default="false" />
-        <setting id="tmdb_log_responses" type="bool" label="TMDB API-Antworten loggen" default="false" />
-    </category>
-
-    <category label="Update">
-        <setting id="update_repo_url" type="text" label="Update-URL (addons.xml)" default="http://127.0.0.1:8080/repo/addons.xml" />
-        <setting id="run_update_check" type="action" label="Jetzt nach Updates suchen" action="RunPlugin(plugin://plugin.video.viewit/?action=check_updates)" option="close" />
-        <setting id="update_info" type="text" label="Updates laufen ueber den normalen Kodi-Update-Mechanismus." default="" enable="false" />
-        <setting id="update_version_addon" type="text" label="ViewIT Version" default="-" enable="false" />
-        <setting id="update_version_serienstream" type="text" label="Serienstream Version" default="-" enable="false" />
-        <setting id="update_version_aniworld" type="text" label="Aniworld Version" default="-" enable="false" />
-        <setting id="update_version_einschalten" type="text" label="Einschalten Version" default="-" enable="false" />
-        <setting id="update_version_topstreamfilm" type="text" label="Topstreamfilm Version" default="-" enable="false" />
-        <setting id="update_version_filmpalast" type="text" label="Filmpalast Version" default="-" enable="false" />
-        <setting id="update_version_doku_streams" type="text" label="Doku-Streams Version" default="-" enable="false" />
-    </category>
 </settings>

docs/RELEASE.md (new file, 49 lines)

@@ -0,0 +1,49 @@
# Release Flow (Main + Nightly + Dev)
This project uses three release channels:
- `dev`: playground for experiments
- `nightly`: integration and test channel
- `main`: stable channel
## Rules
- Experimental work goes to `dev`.
- Feature work for release goes to `nightly`.
- Promote from `nightly` to `main` with `--squash` only.
- `main` version has no suffix (`0.1.60`).
- `nightly` version uses `-nightly` and is always at least one patch higher than `main` (`0.1.61-nightly`).
- `dev` version uses `-dev` (`0.1.62-dev`).
- Keep changelogs split:
- `CHANGELOG-DEV.md`
- `CHANGELOG-NIGHTLY.md`
- `CHANGELOG.md`
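The suffix rules above are easy to enforce mechanically before publishing. A hedged sketch of such a check (the patterns are illustrative and not part of this repo; the "nightly is at least one patch ahead of main" rule is not covered here):

```python
import re

# Suffix rules: main is bare X.Y.Z, nightly ends in -nightly, dev in -dev.
CHANNEL_PATTERNS = {
    "main": r"\d+\.\d+\.\d+",
    "nightly": r"\d+\.\d+\.\d+-nightly",
    "dev": r"\d+\.\d+\.\d+-dev",
}


def channel_ok(branch: str, version: str) -> bool:
    """Return True if the version string carries the suffix required by the branch."""
    pattern = CHANNEL_PATTERNS.get(branch)
    return bool(pattern and re.fullmatch(pattern, version))


assert channel_ok("main", "0.1.60")
assert channel_ok("nightly", "0.1.61-nightly")
assert channel_ok("dev", "0.1.62-dev")
assert not channel_ok("main", "0.1.61-nightly")
```

Running a check like this in a pre-push hook would catch a `-nightly` version accidentally landing on `main` before the repo artifacts are built.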
## Nightly publish
1) Finish changes on `nightly`.
2) Bump addon version in `addon/addon.xml` to `X.Y.Z-nightly`.
3) Build and publish nightly repo artifacts.
4) Push `nightly`.
## Promote nightly to main
```bash
git checkout main
git pull origin main
git merge --squash nightly
git commit -m "release: X.Y.Z"
```
Then:
1) Set `addon/addon.xml` version to `X.Y.Z` (without `-nightly`).
2) Build and publish main repo artifacts.
3) Push `main`.
4) Optional tag: `vX.Y.Z`.
## Local ZIPs (separated)
- Dev ZIP output: `dist/local_zips/dev/`
- Main ZIP output: `dist/local_zips/main/`
- Nightly ZIP output: `dist/local_zips/nightly/`