Compare commits

..

8 Commits

33 changed files with 1162 additions and 2302 deletions

View File

@@ -1,29 +0,0 @@
# Changelog (Nightly)
## 0.1.61-nightly - 2026-02-23
- Update dialog: fixed choice of `Installieren` / `Abbrechen` (no more swapped Yes/No dialog).
- Versions in the update dialog are filtered by channel:
  - Main: only `x.y.z`
  - Nightly: only `x.y.z-nightly`
- The installed version is read directly from `addon.xml`.
- Switching channels immediately installs the latest version from the selected channel.
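The channel filter described above boils down to a version-string check. A minimal sketch of that rule, assuming plain semantic-version strings; the helper name and regexes are illustrative and not taken from the addon code:

```python
import re

# Illustrative only: Main keeps plain x.y.z releases, Nightly keeps x.y.z-nightly builds.
_STABLE = re.compile(r"^\d+\.\d+\.\d+$")
_NIGHTLY = re.compile(r"^\d+\.\d+\.\d+-nightly$")

def filter_versions_by_channel(versions: list[str], channel: str) -> list[str]:
    pattern = _NIGHTLY if channel.strip().casefold() == "nightly" else _STABLE
    return [v for v in versions if pattern.match(v.strip())]

# filter_versions_by_channel(["0.1.61", "0.1.61-nightly"], "Main") -> ["0.1.61"]
```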
## 0.1.59-nightly - 2026-02-23
- Contains all changes from `0.1.58`.
- Update channel defaults to `Nightly`.
- Nightly repo URL set as the default.
- Settings menu reordered:
  - Quellen
  - Metadaten
  - TMDB Erweitert
  - Updates
  - Debug Global
  - Debug Quellen
- Page size in lists set to 20.
- `topstream_genre_max_pages` removed.
## Note
- Nightly is for testing and may change at short notice.

View File

@@ -1,23 +0,0 @@
# Changelog (Stable)
## 0.1.61 - 2026-02-23
- Menus and labels further unified (ASCII-only, consistent texts per plugin).
- Update area reworked:
  - Switching channels installs the latest version of the selected channel right away.
  - Version picker with changelog display and a clear Installieren/Abbrechen choice.
  - Installed version shown directly from the local `addon.xml`.
  - Channel-specific version filter (Main only stable, Nightly only `-nightly`).
- Resolver/playback flow unified and hoster URL normalization centralized.
- Settings cleaned up (structured categories, fewer legacy options).
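Showing the installed version straight from the local `addon.xml` is a simple attribute read. A sketch with the standard library, assuming the `addon/addon.xml` layout referenced in the README; the real addon may go through Kodi's `xbmcaddon` API instead:

```python
import xml.etree.ElementTree as ET

# Illustrative only: the version sits as an attribute on the root <addon> element.
def installed_version(addon_xml_path: str = "addon/addon.xml") -> str:
    root = ET.parse(addon_xml_path).getroot()  # <addon ... version="0.1.61" ...>
    return root.get("version", "")
```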
## 0.1.58 - 2026-02-23
- Menu labels unified (`Haeufig gesehen`, `Neuste Titel`).
- `Neue Titel` and `Neueste Folgen` merged into a single `Neuste Titel` menu entry.
- Hoster header adjustment added centrally after `resolve_stream_link`.
- Notification on a Cloudflare block via ResolveURL instead of silent failed attempts.
- Update settings extended (channel, manual check, optional auto check).
- Metadata parsing updated in AniWorld and Filmpalast (more robust cover/plot handling).
- Topstreamfilm search: missing `urlencode` import fixed.
- Some unused functions removed.
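The Cloudflare notice relies on recognizing a challenge page in the fetched HTML. The helper the plugins call (`_looks_like_cloudflare_challenge`) is not shown in this compare; a plausible heuristic of that kind, stated purely as an assumption, could look like this:

```python
# Assumed heuristic, not the addon's actual implementation: Cloudflare challenge
# pages usually carry one of a few well-known markers in their markup.
def looks_like_cloudflare_challenge(html: str) -> bool:
    body = (html or "").casefold()
    markers = ("just a moment", "checking your browser", "cf-challenge", "__cf_chl_")
    return any(marker in body for marker in markers)
```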

View File

@@ -2,37 +2,41 @@
 <img src="addon/resources/logo.png" alt="ViewIT Logo" width="220" />
-ViewIT is a Kodi addon.
-It searches the providers and starts streams.
+ViewIT is a Kodi addon for browsing and playing content from the supported providers.
 ## Project structure
-- `addon/` Kodi addon source code
-- `scripts/` build scripts
-- `dist/` build outputs
-- `docs/` documentation
-- `tests/` tests
+- `addon/` Kodi addon source code
+- `scripts/` build scripts (work with `addon/` + `dist/`)
+- `dist/` build outputs (ZIPs)
+- `docs/`, `tests/`
-## Build and Release
-- Build the addon folder: `./scripts/build_install_addon.sh`
-- Build the Kodi ZIP: `./scripts/build_kodi_zip.sh`
-- Maintain the version: `addon/addon.xml`
-- Reproducible ZIP: optionally set `SOURCE_DATE_EPOCH`
+## Build & Release
+- Build the addon folder: `./scripts/build_install_addon.sh` → `dist/<addon_id>/`
+- Build the Kodi ZIP: `./scripts/build_kodi_zip.sh` → `dist/<addon_id>-<version>.zip`
+- Addon version in `addon/addon.xml`
+- Reproducible ZIPs: optionally set `SOURCE_DATE_EPOCH`
 ## Local Kodi repository
-- Build the repository: `./scripts/build_local_kodi_repo.sh`
-- Serve the repository: `./scripts/serve_local_kodi_repo.sh`
-- Default URL: `http://127.0.0.1:8080/repo/addons.xml`
-- Custom URL at build time: `REPO_BASE_URL=http://<host>:<port>/repo ./scripts/build_local_kodi_repo.sh`
+- Build the repository (incl. ZIPs + `addons.xml` + `addons.xml.md5`): `./scripts/build_local_kodi_repo.sh`
+- Serve it locally: `./scripts/serve_local_kodi_repo.sh`
+- Default URL: `http://127.0.0.1:8080/repo/addons.xml`
+- Optionally set a custom URL at build time: `REPO_BASE_URL=http://<host>:<port>/repo ./scripts/build_local_kodi_repo.sh`
-## Development
-- Router: `addon/default.py`
+## Gitea release asset upload
+- Build the ZIP: `./scripts/build_kodi_zip.sh`
+- Set the token: `export GITEA_TOKEN=<token>`
+- Upload the asset to a tag (creates the release if needed): `./scripts/publish_gitea_release.sh`
+- Optional: `--tag v0.1.50 --asset dist/plugin.video.viewit-0.1.50.zip`
+## Development (short)
+- Main logic: `addon/default.py`
 - Plugins: `addon/plugins/*_plugin.py`
 - Settings: `addon/resources/settings.xml`
-## Tests
-- Install dev packages: `./.venv/bin/pip install -r requirements-dev.txt`
-- Run tests: `./.venv/bin/pytest`
-- XML report: `./.venv/bin/pytest --cov-report=xml`
+## Tests with coverage
+- Install dev dependencies: `./.venv/bin/pip install -r requirements-dev.txt`
+- Run tests + coverage: `./.venv/bin/pytest`
+- Optional (XML report): `./.venv/bin/pytest --cov-report=xml`
 ## Documentation
 See `docs/`.
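The local repository pairs `addons.xml` with an `addons.xml.md5` checksum, which is what Kodi verifies before downloading the index. A minimal sketch of that pairing, assuming a `dist/repo` layout; the actual paths written by `build_local_kodi_repo.sh` are not shown in this compare:

```python
import hashlib
from pathlib import Path

# Illustrative layout only; the build script may place the files differently.
def write_addons_md5(repo_dir: str = "dist/repo") -> None:
    addons_xml = Path(repo_dir) / "addons.xml"
    digest = hashlib.md5(addons_xml.read_bytes()).hexdigest()
    (Path(repo_dir) / "addons.xml.md5").write_text(digest, encoding="utf-8")
```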

Binary file not shown.

Binary file not shown.

Binary file not shown.

Binary file not shown.

Binary file not shown.

View File

@@ -1,5 +1,5 @@
-<?xml version='1.0' encoding='utf-8'?>
-<addon id="plugin.video.viewit" name="ViewIt" version="0.1.61" provider-name="ViewIt">
+<?xml version="1.0" encoding="UTF-8"?>
+<addon id="plugin.video.viewit" name="ViewIt" version="0.1.56" provider-name="ViewIt">
   <requires>
     <import addon="xbmc.python" version="3.0.0" />
     <import addon="script.module.requests" />
@@ -10,8 +10,8 @@
     <provides>video</provides>
   </extension>
   <extension point="xbmc.addon.metadata">
-    <summary>Suche und Wiedergabe fuer mehrere Quellen</summary>
-    <description>Findet Titel in unterstuetzten Quellen und startet Filme oder Episoden direkt in Kodi.</description>
+    <summary>ViewIt Kodi Plugin</summary>
+    <description>Streaming-Addon für Streamingseiten: Suche, Staffeln/Episoden und Wiedergabe.</description>
     <assets>
       <icon>icon.png</icon>
     </assets>

File diff suppressed because it is too large.

View File

@@ -32,12 +32,3 @@ def get_requests_session(key: str, *, headers: Optional[dict[str, str]] = None):
         pass
     return session
-
-
-def close_all_sessions() -> None:
-    """Close and clear all pooled sessions."""
-    for session in list(_SESSIONS.values()):
-        try:
-            session.close()
-        except Exception:
-            pass
-    _SESSIONS.clear()
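Elsewhere in this compare the pooled sessions are obtained per provider key. A usage sketch based on the calls visible in the plugin diffs; the header dict and URL are placeholders:

```python
# Signature as used by the plugin modules shown in this compare.
HEADERS = {"User-Agent": "Mozilla/5.0 (Kodi; ViewIt)"}
session = get_requests_session("aniworld", headers=HEADERS)
response = session.get("https://example.org", headers=HEADERS, timeout=15)
```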

View File

@@ -1,93 +0,0 @@
from __future__ import annotations

import re

from plugin_interface import BasisPlugin
from tmdb import TmdbCastMember

METADATA_MODE_AUTO = 0
METADATA_MODE_SOURCE = 1
METADATA_MODE_TMDB = 2
METADATA_MODE_MIX = 3


def metadata_setting_id(plugin_name: str) -> str:
    safe = re.sub(r"[^a-z0-9]+", "_", (plugin_name or "").strip().casefold()).strip("_")
    return f"{safe}_metadata_source" if safe else "metadata_source"


def plugin_supports_metadata(plugin: BasisPlugin) -> bool:
    try:
        return plugin.__class__.metadata_for is not BasisPlugin.metadata_for
    except Exception:
        return False


def metadata_policy(
    plugin_name: str,
    plugin: BasisPlugin,
    *,
    allow_tmdb: bool,
    get_setting_int=None,
) -> tuple[bool, bool, bool]:
    if not callable(get_setting_int):
        return plugin_supports_metadata(plugin), allow_tmdb, bool(getattr(plugin, "prefer_source_metadata", False))
    mode = get_setting_int(metadata_setting_id(plugin_name), default=METADATA_MODE_AUTO)
    supports_source = plugin_supports_metadata(plugin)
    if mode == METADATA_MODE_SOURCE:
        return supports_source, False, True
    if mode == METADATA_MODE_TMDB:
        return False, allow_tmdb, False
    if mode == METADATA_MODE_MIX:
        return supports_source, allow_tmdb, True
    prefer_source = bool(getattr(plugin, "prefer_source_metadata", False))
    return supports_source, allow_tmdb, prefer_source


def collect_plugin_metadata(
    plugin: BasisPlugin,
    titles: list[str],
) -> dict[str, tuple[dict[str, str], dict[str, str], list[TmdbCastMember] | None]]:
    getter = getattr(plugin, "metadata_for", None)
    if not callable(getter):
        return {}
    collected: dict[str, tuple[dict[str, str], dict[str, str], list[TmdbCastMember] | None]] = {}
    for title in titles:
        try:
            labels, art, cast = getter(title)
        except Exception:
            continue
        if isinstance(labels, dict) or isinstance(art, dict) or cast:
            label_map = {str(k): str(v) for k, v in dict(labels or {}).items() if v}
            art_map = {str(k): str(v) for k, v in dict(art or {}).items() if v}
            collected[title] = (label_map, art_map, cast if isinstance(cast, list) else None)
    return collected


def needs_tmdb(labels: dict[str, str], art: dict[str, str], *, want_plot: bool, want_art: bool) -> bool:
    if want_plot and not labels.get("plot"):
        return True
    if want_art and not (art.get("thumb") or art.get("poster") or art.get("fanart") or art.get("landscape")):
        return True
    return False


def merge_metadata(
    title: str,
    tmdb_labels: dict[str, str] | None,
    tmdb_art: dict[str, str] | None,
    tmdb_cast: list[TmdbCastMember] | None,
    plugin_meta: tuple[dict[str, str], dict[str, str], list[TmdbCastMember] | None] | None,
) -> tuple[dict[str, str], dict[str, str], list[TmdbCastMember] | None]:
    labels = dict(tmdb_labels or {})
    art = dict(tmdb_art or {})
    cast = tmdb_cast
    if plugin_meta is not None:
        meta_labels, meta_art, meta_cast = plugin_meta
        labels.update({k: str(v) for k, v in dict(meta_labels or {}).items() if v})
        art.update({k: str(v) for k, v in dict(meta_art or {}).items() if v})
        if meta_cast is not None:
            cast = meta_cast
    if "title" not in labels:
        labels["title"] = title
    return labels, art, cast
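Before their removal, these helpers were presumably combined by the listing code roughly as follows. A hypothetical caller, shown only to illustrate how the pieces fit together; `plugin`, `get_setting_int`, and the TMDB lookup are stand-ins, not names from this repository:

```python
# Hypothetical glue code; the names below are stand-ins for the real callers.
def build_listing_metadata(plugin, plugin_name, titles, get_setting_int, allow_tmdb=True):
    use_source, use_tmdb, _prefer_source = metadata_policy(
        plugin_name, plugin, allow_tmdb=allow_tmdb, get_setting_int=get_setting_int
    )
    source_meta = collect_plugin_metadata(plugin, titles) if use_source else {}
    merged = {}
    for title in titles:
        plugin_meta = source_meta.get(title)
        labels, art = (plugin_meta[0], plugin_meta[1]) if plugin_meta else ({}, {})
        tmdb_labels = tmdb_art = tmdb_cast = None
        if use_tmdb and needs_tmdb(labels, art, want_plot=True, want_art=True):
            # A TMDB lookup would populate these three values here.
            tmdb_labels, tmdb_art, tmdb_cast = {}, {}, None
        merged[title] = merge_metadata(title, tmdb_labels, tmdb_art, tmdb_cast, plugin_meta)
    return merged
```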

View File

@@ -15,9 +15,7 @@ from __future__ import annotations
 from datetime import datetime
 import hashlib
 import os
-import re
 from typing import Optional
-from urllib.parse import parse_qsl, urlencode

 try:  # pragma: no cover - Kodi runtime
     import xbmcaddon  # type: ignore[import-not-found]
@@ -239,40 +237,3 @@ def dump_response_html(
     max_files = get_setting_int(addon_id, max_files_setting_id, default=200)
     _prune_dump_files(log_dir, prefix=filename_prefix, max_files=max_files)
     _append_text_file(path, content)
-
-
-def normalize_resolved_stream_url(final_url: str, *, source_url: str = "") -> str:
-    """Normalisiert hoster-spezifische Header im finalen Stream-Link.
-
-    `final_url` kann ein Kodi-Header-Suffix enthalten: `url|Key=Value&...`.
-    Die Funktion passt nur bekannte Problemfaelle an und laesst sonst alles unveraendert.
-    """
-    url = (final_url or "").strip()
-    if not url:
-        return ""
-    normalized = _normalize_supervideo_serversicuro(url, source_url=source_url)
-    return normalized
-
-
-def _normalize_supervideo_serversicuro(final_url: str, *, source_url: str = "") -> str:
-    if "serversicuro.cc/hls/" not in final_url.casefold() or "|" not in final_url:
-        return final_url
-    source = (source_url or "").strip()
-    code_match = re.search(
-        r"supervideo\.(?:tv|cc)/(?:e/)?([a-z0-9]+)(?:\.html)?",
-        source,
-        flags=re.IGNORECASE,
-    )
-    if not code_match:
-        return final_url
-    code = (code_match.group(1) or "").strip()
-    if not code:
-        return final_url
-    media_url, header_suffix = final_url.split("|", 1)
-    headers = dict(parse_qsl(header_suffix, keep_blank_values=True))
-    headers["Referer"] = f"https://supervideo.cc/e/{code}"
-    return f"{media_url}|{urlencode(headers)}"

View File

@@ -4,7 +4,7 @@
 from __future__ import annotations

 from abc import ABC, abstractmethod
-from typing import Any, Callable, Dict, List, Optional, Set, Tuple
+from typing import Any, Dict, List, Optional, Set, Tuple


 class BasisPlugin(ABC):
@@ -15,11 +15,7 @@ class BasisPlugin(ABC):
     prefer_source_metadata: bool = False

     @abstractmethod
-    async def search_titles(
-        self,
-        query: str,
-        progress_callback: Optional[Callable[[str, Optional[int]], Any]] = None,
-    ) -> List[str]:
+    async def search_titles(self, query: str) -> List[str]:
         """Liefert eine Liste aller Treffer fuer die Suche."""

     @abstractmethod
Binary file not shown.

View File

@@ -9,7 +9,7 @@ Zum Verwenden:
 from __future__ import annotations

 from dataclasses import dataclass
-from typing import TYPE_CHECKING, Any, Callable, List, Optional
+from typing import TYPE_CHECKING, Any, List, Optional

 try:  # pragma: no cover - optional dependency
     import requests
@@ -88,13 +88,9 @@ class TemplatePlugin(BasisPlugin):
         self._session = session
         return self._session

-    async def search_titles(
-        self,
-        query: str,
-        progress_callback: Optional[Callable[[str, Optional[int]], Any]] = None,
-    ) -> List[str]:
+    async def search_titles(self, query: str) -> List[str]:
         """TODO: Suche auf der Zielseite implementieren."""
-        _ = (query, progress_callback)
+        _ = query
         return []

     def seasons_for(self, title: str) -> List[str]:

View File

@@ -13,8 +13,7 @@ import hashlib
import json import json
import re import re
import time import time
from typing import TYPE_CHECKING, Any, Callable, Dict, List, Optional, Tuple from typing import TYPE_CHECKING, Any, Dict, List, Optional, Tuple
from urllib.parse import quote
try: # pragma: no cover - optional dependency try: # pragma: no cover - optional dependency
import requests import requests
@@ -70,16 +69,6 @@ HEADERS = {
SESSION_CACHE_TTL_SECONDS = 300 SESSION_CACHE_TTL_SECONDS = 300
SESSION_CACHE_PREFIX = "viewit.aniworld" SESSION_CACHE_PREFIX = "viewit.aniworld"
SESSION_CACHE_MAX_TITLE_URLS = 800 SESSION_CACHE_MAX_TITLE_URLS = 800
ProgressCallback = Optional[Callable[[str, Optional[int]], Any]]
def _emit_progress(callback: ProgressCallback, message: str, percent: Optional[int] = None) -> None:
if not callable(callback):
return
try:
callback(str(message or ""), None if percent is None else int(percent))
except Exception:
return
@dataclass @dataclass
@@ -137,7 +126,7 @@ def _latest_episodes_url() -> str:
def _search_url(query: str) -> str: def _search_url(query: str) -> str:
return f"{_get_base_url()}/search?q={quote((query or '').strip())}" return f"{_get_base_url()}/search?q={query}"
def _search_api_url() -> str: def _search_api_url() -> str:
@@ -300,56 +289,37 @@ def _get_soup(url: str, *, session: Optional[RequestsSession] = None) -> Beautif
_ensure_requests() _ensure_requests()
_log_visit(url) _log_visit(url)
sess = session or get_requests_session("aniworld", headers=HEADERS) sess = session or get_requests_session("aniworld", headers=HEADERS)
response = None
try: try:
response = sess.get(url, headers=HEADERS, timeout=DEFAULT_TIMEOUT) response = sess.get(url, headers=HEADERS, timeout=DEFAULT_TIMEOUT)
response.raise_for_status() response.raise_for_status()
except Exception as exc: except Exception as exc:
_log_error(f"GET {url} failed: {exc}") _log_error(f"GET {url} failed: {exc}")
raise raise
try: if response.url and response.url != url:
final_url = (response.url or url) if response is not None else url _log_url(response.url, kind="REDIRECT")
body = (response.text or "") if response is not None else "" _log_response_html(url, response.text)
if final_url != url: if _looks_like_cloudflare_challenge(response.text):
_log_url(final_url, kind="REDIRECT") raise RuntimeError("Cloudflare-Schutz erkannt. requests reicht ggf. nicht aus.")
_log_response_html(url, body) return BeautifulSoup(response.text, "html.parser")
if _looks_like_cloudflare_challenge(body):
raise RuntimeError("Cloudflare-Schutz erkannt. requests reicht ggf. nicht aus.")
return BeautifulSoup(body, "html.parser")
finally:
if response is not None:
try:
response.close()
except Exception:
pass
def _get_html_simple(url: str) -> str: def _get_html_simple(url: str) -> str:
_ensure_requests() _ensure_requests()
_log_visit(url) _log_visit(url)
sess = get_requests_session("aniworld", headers=HEADERS) sess = get_requests_session("aniworld", headers=HEADERS)
response = None
try: try:
response = sess.get(url, headers=HEADERS, timeout=DEFAULT_TIMEOUT) response = sess.get(url, headers=HEADERS, timeout=DEFAULT_TIMEOUT)
response.raise_for_status() response.raise_for_status()
except Exception as exc: except Exception as exc:
_log_error(f"GET {url} failed: {exc}") _log_error(f"GET {url} failed: {exc}")
raise raise
try: if response.url and response.url != url:
final_url = (response.url or url) if response is not None else url _log_url(response.url, kind="REDIRECT")
body = (response.text or "") if response is not None else "" body = response.text
if final_url != url: _log_response_html(url, body)
_log_url(final_url, kind="REDIRECT") if _looks_like_cloudflare_challenge(body):
_log_response_html(url, body) raise RuntimeError("Cloudflare-Schutz erkannt. requests reicht ggf. nicht aus.")
if _looks_like_cloudflare_challenge(body): return body
raise RuntimeError("Cloudflare-Schutz erkannt. requests reicht ggf. nicht aus.")
return body
finally:
if response is not None:
try:
response.close()
except Exception:
pass
def _get_soup_simple(url: str) -> BeautifulSoupT: def _get_soup_simple(url: str) -> BeautifulSoupT:
@@ -381,27 +351,17 @@ def _post_json(url: str, *, payload: Dict[str, str], session: Optional[RequestsS
_ensure_requests() _ensure_requests()
_log_visit(url) _log_visit(url)
sess = session or get_requests_session("aniworld", headers=HEADERS) sess = session or get_requests_session("aniworld", headers=HEADERS)
response = None response = sess.post(url, data=payload, headers=HEADERS, timeout=DEFAULT_TIMEOUT)
response.raise_for_status()
if response.url and response.url != url:
_log_url(response.url, kind="REDIRECT")
_log_response_html(url, response.text)
if _looks_like_cloudflare_challenge(response.text):
raise RuntimeError("Cloudflare-Schutz erkannt. requests reicht ggf. nicht aus.")
try: try:
response = sess.post(url, data=payload, headers=HEADERS, timeout=DEFAULT_TIMEOUT) return response.json()
response.raise_for_status() except Exception:
final_url = (response.url or url) if response is not None else url return None
body = (response.text or "") if response is not None else ""
if final_url != url:
_log_url(final_url, kind="REDIRECT")
_log_response_html(url, body)
if _looks_like_cloudflare_challenge(body):
raise RuntimeError("Cloudflare-Schutz erkannt. requests reicht ggf. nicht aus.")
try:
return response.json()
except Exception:
return None
finally:
if response is not None:
try:
response.close()
except Exception:
pass
def _extract_canonical_url(soup: BeautifulSoupT, fallback: str) -> str: def _extract_canonical_url(soup: BeautifulSoupT, fallback: str) -> str:
@@ -595,18 +555,10 @@ def resolve_redirect(target_url: str) -> Optional[str]:
_log_visit(normalized_url) _log_visit(normalized_url)
session = get_requests_session("aniworld", headers=HEADERS) session = get_requests_session("aniworld", headers=HEADERS)
_get_soup(_get_base_url(), session=session) _get_soup(_get_base_url(), session=session)
response = None response = session.get(normalized_url, headers=HEADERS, timeout=DEFAULT_TIMEOUT, allow_redirects=True)
try: if response.url:
response = session.get(normalized_url, headers=HEADERS, timeout=DEFAULT_TIMEOUT, allow_redirects=True) _log_url(response.url, kind="RESOLVED")
if response.url: return response.url if response.url else None
_log_url(response.url, kind="RESOLVED")
return response.url if response.url else None
finally:
if response is not None:
try:
response.close()
except Exception:
pass
def fetch_episode_hoster_names(episode_url: str) -> List[str]: def fetch_episode_hoster_names(episode_url: str) -> List[str]:
@@ -677,12 +629,11 @@ def fetch_episode_stream_link(
return resolved return resolved
def search_animes(query: str, *, progress_callback: ProgressCallback = None) -> List[SeriesResult]: def search_animes(query: str) -> List[SeriesResult]:
_ensure_requests() _ensure_requests()
query = (query or "").strip() query = (query or "").strip()
if not query: if not query:
return [] return []
_emit_progress(progress_callback, "AniWorld API-Suche", 15)
session = get_requests_session("aniworld", headers=HEADERS) session = get_requests_session("aniworld", headers=HEADERS)
try: try:
session.get(_get_base_url(), headers=HEADERS, timeout=DEFAULT_TIMEOUT) session.get(_get_base_url(), headers=HEADERS, timeout=DEFAULT_TIMEOUT)
@@ -692,9 +643,7 @@ def search_animes(query: str, *, progress_callback: ProgressCallback = None) ->
results: List[SeriesResult] = [] results: List[SeriesResult] = []
seen: set[str] = set() seen: set[str] = set()
if isinstance(data, list): if isinstance(data, list):
for idx, entry in enumerate(data, start=1): for entry in data:
if idx == 1 or idx % 50 == 0:
_emit_progress(progress_callback, f"API auswerten {idx}/{len(data)}", 35)
if not isinstance(entry, dict): if not isinstance(entry, dict):
continue continue
title = _strip_html((entry.get("title") or "").strip()) title = _strip_html((entry.get("title") or "").strip())
@@ -716,16 +665,10 @@ def search_animes(query: str, *, progress_callback: ProgressCallback = None) ->
seen.add(key) seen.add(key)
description = (entry.get("description") or "").strip() description = (entry.get("description") or "").strip()
results.append(SeriesResult(title=title, description=description, url=url)) results.append(SeriesResult(title=title, description=description, url=url))
_emit_progress(progress_callback, f"API-Treffer: {len(results)}", 85)
return results return results
_emit_progress(progress_callback, "HTML-Suche (Fallback)", 55) soup = _get_soup_simple(_search_url(requests.utils.quote(query)))
soup = _get_soup_simple(_search_url(query)) for anchor in soup.select("a[href^='/anime/stream/'][href]"):
anchors = soup.select("a[href^='/anime/stream/'][href]")
total_anchors = max(1, len(anchors))
for idx, anchor in enumerate(anchors, start=1):
if idx == 1 or idx % 100 == 0:
_emit_progress(progress_callback, f"HTML auswerten {idx}/{total_anchors}", 70)
href = (anchor.get("href") or "").strip() href = (anchor.get("href") or "").strip()
if not href or "/staffel-" in href or "/episode-" in href: if not href or "/staffel-" in href or "/episode-" in href:
continue continue
@@ -743,7 +686,6 @@ def search_animes(query: str, *, progress_callback: ProgressCallback = None) ->
continue continue
seen.add(key) seen.add(key)
results.append(SeriesResult(title=title, description="", url=url)) results.append(SeriesResult(title=title, description="", url=url))
_emit_progress(progress_callback, f"HTML-Treffer: {len(results)}", 85)
return results return results
@@ -754,7 +696,6 @@ class AniworldPlugin(BasisPlugin):
def __init__(self) -> None: def __init__(self) -> None:
self._anime_results: Dict[str, SeriesResult] = {} self._anime_results: Dict[str, SeriesResult] = {}
self._title_url_cache: Dict[str, str] = self._load_title_url_cache() self._title_url_cache: Dict[str, str] = self._load_title_url_cache()
self._title_meta: Dict[str, tuple[str, str]] = {}
self._genre_names_cache: Optional[List[str]] = None self._genre_names_cache: Optional[List[str]] = None
self._season_cache: Dict[str, List[SeasonInfo]] = {} self._season_cache: Dict[str, List[SeasonInfo]] = {}
self._season_links_cache: Dict[str, List[SeasonInfo]] = {} self._season_links_cache: Dict[str, List[SeasonInfo]] = {}
@@ -819,135 +760,8 @@ class AniworldPlugin(BasisPlugin):
changed = True changed = True
if changed and persist: if changed and persist:
self._save_title_url_cache() self._save_title_url_cache()
if description:
old_plot, old_poster = self._title_meta.get(title, ("", ""))
self._title_meta[title] = (description.strip() or old_plot, old_poster)
return changed return changed
def _store_title_meta(self, title: str, *, plot: str = "", poster: str = "") -> None:
title = (title or "").strip()
if not title:
return
old_plot, old_poster = self._title_meta.get(title, ("", ""))
merged_plot = (plot or old_plot or "").strip()
merged_poster = (poster or old_poster or "").strip()
self._title_meta[title] = (merged_plot, merged_poster)
@staticmethod
def _is_series_image_url(url: str) -> bool:
value = (url or "").strip().casefold()
if not value:
return False
blocked = (
"/public/img/facebook",
"/public/img/logo",
"aniworld-logo",
"favicon",
"/public/img/german.svg",
"/public/img/japanese-",
)
return not any(marker in value for marker in blocked)
@staticmethod
def _extract_style_url(style_value: str) -> str:
style_value = (style_value or "").strip()
if not style_value:
return ""
match = re.search(r"url\((['\"]?)(.*?)\1\)", style_value, flags=re.IGNORECASE)
if not match:
return ""
return (match.group(2) or "").strip()
def _extract_series_metadata(self, soup: BeautifulSoupT) -> tuple[str, str, str]:
if not soup:
return "", "", ""
plot = ""
poster = ""
fanart = ""
root = soup.select_one("#series") or soup
description_node = root.select_one("p.seri_des")
if description_node is not None:
full_text = (description_node.get("data-full-description") or "").strip()
short_text = (description_node.get_text(" ", strip=True) or "").strip()
plot = full_text or short_text
if not plot:
for selector in ("meta[property='og:description']", "meta[name='description']"):
node = soup.select_one(selector)
if node is None:
continue
content = (node.get("content") or "").strip()
if content:
plot = content
break
if not plot:
for selector in (".series-description", ".seri_des", ".description", "article p"):
node = soup.select_one(selector)
if node is None:
continue
text = (node.get_text(" ", strip=True) or "").strip()
if text:
plot = text
break
cover = root.select_one("div.seriesCoverBox img[itemprop='image'], div.seriesCoverBox img")
if cover is not None:
for attr in ("data-src", "src"):
value = (cover.get(attr) or "").strip()
if value:
candidate = _absolute_url(value)
if self._is_series_image_url(candidate):
poster = candidate
break
if not poster:
for selector in ("meta[property='og:image']", "meta[name='twitter:image']"):
node = soup.select_one(selector)
if node is None:
continue
content = (node.get("content") or "").strip()
if content:
candidate = _absolute_url(content)
if self._is_series_image_url(candidate):
poster = candidate
break
if not poster:
for selector in ("img.seriesCoverBox", ".seriesCoverBox img"):
image = soup.select_one(selector)
if image is None:
continue
value = (image.get("data-src") or image.get("src") or "").strip()
if value:
candidate = _absolute_url(value)
if self._is_series_image_url(candidate):
poster = candidate
break
backdrop_node = root.select_one("section.title .backdrop, .SeriesSection .backdrop, .backdrop")
if backdrop_node is not None:
raw_style = (backdrop_node.get("style") or "").strip()
style_url = self._extract_style_url(raw_style)
if style_url:
candidate = _absolute_url(style_url)
if self._is_series_image_url(candidate):
fanart = candidate
if not fanart:
for selector in ("meta[property='og:image']",):
node = soup.select_one(selector)
if node is None:
continue
content = (node.get("content") or "").strip()
if content:
candidate = _absolute_url(content)
if self._is_series_image_url(candidate):
fanart = candidate
break
return plot, poster, fanart
@staticmethod @staticmethod
def _season_links_cache_name(series_url: str) -> str: def _season_links_cache_name(series_url: str) -> str:
digest = hashlib.sha1((series_url or "").encode("utf-8")).hexdigest()[:20] digest = hashlib.sha1((series_url or "").encode("utf-8")).hexdigest()[:20]
@@ -1079,43 +893,6 @@ class AniworldPlugin(BasisPlugin):
return None return None
def metadata_for(self, title: str) -> tuple[dict[str, str], dict[str, str], list[object] | None]:
title = (title or "").strip()
if not title:
return {}, {}, None
info: dict[str, str] = {"title": title}
art: dict[str, str] = {}
cached_plot, cached_poster = self._title_meta.get(title, ("", ""))
if cached_plot:
info["plot"] = cached_plot
if cached_poster:
art = {"thumb": cached_poster, "poster": cached_poster}
if "plot" in info and art:
return info, art, None
series = self._find_series_by_title(title)
if series is None or not series.url:
return info, art, None
if series.description and "plot" not in info:
info["plot"] = series.description
try:
soup = _get_soup(series.url, session=get_requests_session("aniworld", headers=HEADERS))
plot, poster, fanart = self._extract_series_metadata(soup)
except Exception:
plot, poster, fanart = "", "", ""
if plot:
info["plot"] = plot
if poster:
art = {"thumb": poster, "poster": poster}
if fanart:
art["fanart"] = fanart
art["landscape"] = fanart
self._store_title_meta(title, plot=info.get("plot", ""), poster=poster)
return info, art, None
def _ensure_popular(self) -> List[SeriesResult]: def _ensure_popular(self) -> List[SeriesResult]:
if self._popular_cache is not None: if self._popular_cache is not None:
return list(self._popular_cache) return list(self._popular_cache)
@@ -1374,7 +1151,7 @@ class AniworldPlugin(BasisPlugin):
return self._episode_label_cache.get(cache_key, {}).get(episode_label) return self._episode_label_cache.get(cache_key, {}).get(episode_label)
return None return None
async def search_titles(self, query: str, progress_callback: ProgressCallback = None) -> List[str]: async def search_titles(self, query: str) -> List[str]:
query = (query or "").strip() query = (query or "").strip()
if not query: if not query:
self._anime_results.clear() self._anime_results.clear()
@@ -1386,8 +1163,7 @@ class AniworldPlugin(BasisPlugin):
if not self._requests_available: if not self._requests_available:
raise RuntimeError("AniworldPlugin kann ohne requests/bs4 nicht suchen.") raise RuntimeError("AniworldPlugin kann ohne requests/bs4 nicht suchen.")
try: try:
_emit_progress(progress_callback, "AniWorld Suche startet", 10) results = search_animes(query)
results = search_animes(query, progress_callback=progress_callback)
except Exception as exc: # pragma: no cover except Exception as exc: # pragma: no cover
self._anime_results.clear() self._anime_results.clear()
self._season_cache.clear() self._season_cache.clear()
@@ -1402,7 +1178,6 @@ class AniworldPlugin(BasisPlugin):
self._season_cache.clear() self._season_cache.clear()
self._season_links_cache.clear() self._season_links_cache.clear()
self._episode_label_cache.clear() self._episode_label_cache.clear()
_emit_progress(progress_callback, f"Treffer aufbereitet: {len(results)}", 95)
return [result.title for result in results] return [result.title for result in results]
def _ensure_seasons(self, title: str) -> List[SeasonInfo]: def _ensure_seasons(self, title: str) -> List[SeasonInfo]:

View File

@@ -5,7 +5,7 @@ from __future__ import annotations
from dataclasses import dataclass from dataclasses import dataclass
import re import re
from urllib.parse import quote from urllib.parse import quote
from typing import TYPE_CHECKING, Any, Callable, Dict, List, Optional from typing import TYPE_CHECKING, Any, Dict, List, Optional
try: # pragma: no cover - optional dependency try: # pragma: no cover - optional dependency
import requests import requests
@@ -44,16 +44,6 @@ SETTING_LOG_URLS = "log_urls_dokustreams"
SETTING_DUMP_HTML = "dump_html_dokustreams" SETTING_DUMP_HTML = "dump_html_dokustreams"
SETTING_SHOW_URL_INFO = "show_url_info_dokustreams" SETTING_SHOW_URL_INFO = "show_url_info_dokustreams"
SETTING_LOG_ERRORS = "log_errors_dokustreams" SETTING_LOG_ERRORS = "log_errors_dokustreams"
ProgressCallback = Optional[Callable[[str, Optional[int]], Any]]
def _emit_progress(callback: ProgressCallback, message: str, percent: Optional[int] = None) -> None:
if not callable(callback):
return
try:
callback(str(message or ""), None if percent is None else int(percent))
except Exception:
return
HEADERS = { HEADERS = {
"User-Agent": "Mozilla/5.0 (Kodi; ViewIt) AppleWebKit/537.36 (KHTML, like Gecko)", "User-Agent": "Mozilla/5.0 (Kodi; ViewIt) AppleWebKit/537.36 (KHTML, like Gecko)",
"Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8", "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
@@ -223,26 +213,16 @@ def _get_soup(url: str, *, session: Optional[RequestsSession] = None) -> Beautif
raise RuntimeError("requests/bs4 sind nicht verfuegbar.") raise RuntimeError("requests/bs4 sind nicht verfuegbar.")
_log_visit(url) _log_visit(url)
sess = session or get_requests_session("dokustreams", headers=HEADERS) sess = session or get_requests_session("dokustreams", headers=HEADERS)
response = None
try: try:
response = sess.get(url, headers=HEADERS, timeout=DEFAULT_TIMEOUT) response = sess.get(url, headers=HEADERS, timeout=DEFAULT_TIMEOUT)
response.raise_for_status() response.raise_for_status()
except Exception as exc: except Exception as exc:
_log_error_message(f"GET {url} failed: {exc}") _log_error_message(f"GET {url} failed: {exc}")
raise raise
try: if response.url and response.url != url:
final_url = (response.url or url) if response is not None else url _log_url_event(response.url, kind="REDIRECT")
body = (response.text or "") if response is not None else "" _log_response_html(url, response.text)
if final_url != url: return BeautifulSoup(response.text, "html.parser")
_log_url_event(final_url, kind="REDIRECT")
_log_response_html(url, body)
return BeautifulSoup(body, "html.parser")
finally:
if response is not None:
try:
response.close()
except Exception:
pass
class DokuStreamsPlugin(BasisPlugin): class DokuStreamsPlugin(BasisPlugin):
@@ -267,17 +247,14 @@ class DokuStreamsPlugin(BasisPlugin):
if REQUESTS_IMPORT_ERROR: if REQUESTS_IMPORT_ERROR:
print(f"DokuStreamsPlugin Importfehler: {REQUESTS_IMPORT_ERROR}") print(f"DokuStreamsPlugin Importfehler: {REQUESTS_IMPORT_ERROR}")
async def search_titles(self, query: str, progress_callback: ProgressCallback = None) -> List[str]: async def search_titles(self, query: str) -> List[str]:
_emit_progress(progress_callback, "Doku-Streams Suche", 15)
hits = self._search_hits(query) hits = self._search_hits(query)
_emit_progress(progress_callback, f"Treffer verarbeiten ({len(hits)})", 70)
self._title_to_url = {hit.title: hit.url for hit in hits if hit.title and hit.url} self._title_to_url = {hit.title: hit.url for hit in hits if hit.title and hit.url}
for hit in hits: for hit in hits:
if hit.title: if hit.title:
self._title_meta[hit.title] = (hit.plot, hit.poster) self._title_meta[hit.title] = (hit.plot, hit.poster)
titles = [hit.title for hit in hits if hit.title] titles = [hit.title for hit in hits if hit.title]
titles.sort(key=lambda value: value.casefold()) titles.sort(key=lambda value: value.casefold())
_emit_progress(progress_callback, f"Fertig: {len(titles)} Treffer", 95)
return titles return titles
def _search_hits(self, query: str) -> List[SearchHit]: def _search_hits(self, query: str) -> List[SearchHit]:

View File

@@ -11,7 +11,7 @@ from __future__ import annotations
import json import json
import re import re
from dataclasses import dataclass from dataclasses import dataclass
from typing import Any, Callable, Dict, List, Optional, Set from typing import Any, Dict, List, Optional, Set
from urllib.parse import urlencode, urljoin, urlsplit from urllib.parse import urlencode, urljoin, urlsplit
try: # pragma: no cover - optional dependency (Kodi dependency) try: # pragma: no cover - optional dependency (Kodi dependency)
@@ -56,16 +56,6 @@ HEADERS = {
"Accept-Language": "de-DE,de;q=0.9,en;q=0.8", "Accept-Language": "de-DE,de;q=0.9,en;q=0.8",
"Connection": "keep-alive", "Connection": "keep-alive",
} }
ProgressCallback = Optional[Callable[[str, Optional[int]], Any]]
def _emit_progress(callback: ProgressCallback, message: str, percent: Optional[int] = None) -> None:
if not callable(callback):
return
try:
callback(str(message or ""), None if percent is None else int(percent))
except Exception:
return
@dataclass(frozen=True) @dataclass(frozen=True)
@@ -536,34 +526,6 @@ class EinschaltenPlugin(BasisPlugin):
self._session = requests.Session() self._session = requests.Session()
return self._session return self._session
def _http_get_text(self, url: str, *, timeout: int = 20) -> tuple[str, str]:
_log_url(url, kind="GET")
_notify_url(url)
sess = self._get_session()
response = None
try:
response = sess.get(url, headers=HEADERS, timeout=timeout)
response.raise_for_status()
final_url = (response.url or url) if response is not None else url
body = (response.text or "") if response is not None else ""
_log_url(final_url, kind="OK")
_log_response_html(final_url, body)
return final_url, body
finally:
if response is not None:
try:
response.close()
except Exception:
pass
def _http_get_json(self, url: str, *, timeout: int = 20) -> tuple[str, Any]:
final_url, body = self._http_get_text(url, timeout=timeout)
try:
payload = json.loads(body or "{}")
except Exception:
payload = {}
return final_url, payload
def _get_base_url(self) -> str: def _get_base_url(self) -> str:
base = _get_setting_text(SETTING_BASE_URL, default=DEFAULT_BASE_URL).strip() base = _get_setting_text(SETTING_BASE_URL, default=DEFAULT_BASE_URL).strip()
return base.rstrip("/") return base.rstrip("/")
@@ -603,6 +565,15 @@ class EinschaltenPlugin(BasisPlugin):
url = urljoin(base + "/", path.lstrip("/")) url = urljoin(base + "/", path.lstrip("/"))
return f"{url}?{urlencode({'query': query})}" return f"{url}?{urlencode({'query': query})}"
def _api_movies_url(self, *, with_genres: int, page: int = 1) -> str:
base = self._get_base_url()
if not base:
return ""
params: Dict[str, str] = {"withGenres": str(int(with_genres))}
if page and int(page) > 1:
params["page"] = str(int(page))
return urljoin(base + "/", "api/movies") + f"?{urlencode(params)}"
def _genre_page_url(self, *, genre_id: int, page: int = 1) -> str: def _genre_page_url(self, *, genre_id: int, page: int = 1) -> str:
"""Genre title pages are rendered server-side and embed the movie list in ng-state. """Genre title pages are rendered server-side and embed the movie list in ng-state.
@@ -675,9 +646,15 @@ class EinschaltenPlugin(BasisPlugin):
if not url: if not url:
return "" return ""
try: try:
_, body = self._http_get_text(url, timeout=20) _log_url(url, kind="GET")
self._detail_html_by_id[movie_id] = body _notify_url(url)
return body sess = self._get_session()
resp = sess.get(url, headers=HEADERS, timeout=20)
resp.raise_for_status()
_log_url(resp.url or url, kind="OK")
_log_response_html(resp.url or url, resp.text)
self._detail_html_by_id[movie_id] = resp.text or ""
return resp.text or ""
except Exception as exc: except Exception as exc:
_log_error(f"GET {url} failed: {exc}") _log_error(f"GET {url} failed: {exc}")
return "" return ""
@@ -690,8 +667,16 @@ class EinschaltenPlugin(BasisPlugin):
if not url: if not url:
return {} return {}
try: try:
_, data = self._http_get_json(url, timeout=20) _log_url(url, kind="GET")
return data _notify_url(url)
sess = self._get_session()
resp = sess.get(url, headers=HEADERS, timeout=20)
resp.raise_for_status()
_log_url(resp.url or url, kind="OK")
# Some backends may return JSON with a JSON content-type; for debugging we still dump text.
_log_response_html(resp.url or url, resp.text)
data = resp.json()
return dict(data) if isinstance(data, dict) else {}
except Exception as exc: except Exception as exc:
_log_error(f"GET {url} failed: {exc}") _log_error(f"GET {url} failed: {exc}")
return {} return {}
@@ -756,12 +741,41 @@ class EinschaltenPlugin(BasisPlugin):
if not url: if not url:
return [] return []
try: try:
_, body = self._http_get_text(url, timeout=20) _log_url(url, kind="GET")
payload = _extract_ng_state_payload(body) _notify_url(url)
sess = self._get_session()
resp = sess.get(url, headers=HEADERS, timeout=20)
resp.raise_for_status()
_log_url(resp.url or url, kind="OK")
_log_response_html(resp.url or url, resp.text)
payload = _extract_ng_state_payload(resp.text)
return _parse_ng_state_movies(payload) return _parse_ng_state_movies(payload)
except Exception: except Exception:
return [] return []
def _fetch_new_titles_movies(self) -> List[MovieItem]:
# "Neue Filme" lives at `/movies/new` and embeds the list in ng-state (`u: "/api/movies"`).
url = self._new_titles_url()
if not url:
return []
try:
_log_url(url, kind="GET")
_notify_url(url)
sess = self._get_session()
resp = sess.get(url, headers=HEADERS, timeout=20)
resp.raise_for_status()
_log_url(resp.url or url, kind="OK")
_log_response_html(resp.url or url, resp.text)
payload = _extract_ng_state_payload(resp.text)
movies = _parse_ng_state_movies(payload)
_log_debug_line(f"parse_ng_state_movies:count={len(movies)}")
if movies:
_log_titles(movies, context="new_titles")
return movies
return []
except Exception:
return []
def _fetch_new_titles_movies_page(self, page: int) -> List[MovieItem]: def _fetch_new_titles_movies_page(self, page: int) -> List[MovieItem]:
page = max(1, int(page or 1)) page = max(1, int(page or 1))
url = self._new_titles_url() url = self._new_titles_url()
@@ -770,8 +784,14 @@ class EinschaltenPlugin(BasisPlugin):
if page > 1: if page > 1:
url = f"{url}?{urlencode({'page': str(page)})}" url = f"{url}?{urlencode({'page': str(page)})}"
try: try:
_, body = self._http_get_text(url, timeout=20) _log_url(url, kind="GET")
payload = _extract_ng_state_payload(body) _notify_url(url)
sess = self._get_session()
resp = sess.get(url, headers=HEADERS, timeout=20)
resp.raise_for_status()
_log_url(resp.url or url, kind="OK")
_log_response_html(resp.url or url, resp.text)
payload = _extract_ng_state_payload(resp.text)
movies, has_more, current_page = _parse_ng_state_movies_with_pagination(payload) movies, has_more, current_page = _parse_ng_state_movies_with_pagination(payload)
_log_debug_line(f"parse_ng_state_movies_page:page={page} count={len(movies)}") _log_debug_line(f"parse_ng_state_movies_page:page={page} count={len(movies)}")
if has_more is not None: if has_more is not None:
@@ -824,8 +844,14 @@ class EinschaltenPlugin(BasisPlugin):
if not url: if not url:
return [] return []
try: try:
_, body = self._http_get_text(url, timeout=20) _log_url(url, kind="GET")
payload = _extract_ng_state_payload(body) _notify_url(url)
sess = self._get_session()
resp = sess.get(url, headers=HEADERS, timeout=20)
resp.raise_for_status()
_log_url(resp.url or url, kind="OK")
_log_response_html(resp.url or url, resp.text)
payload = _extract_ng_state_payload(resp.text)
results = _parse_ng_state_search_results(payload) results = _parse_ng_state_search_results(payload)
return _filter_movies_by_title(query, results) return _filter_movies_by_title(query, results)
except Exception: except Exception:
@@ -841,7 +867,13 @@ class EinschaltenPlugin(BasisPlugin):
api_url = self._api_genres_url() api_url = self._api_genres_url()
if api_url: if api_url:
try: try:
_, payload = self._http_get_json(api_url, timeout=20) _log_url(api_url, kind="GET")
_notify_url(api_url)
sess = self._get_session()
resp = sess.get(api_url, headers=HEADERS, timeout=20)
resp.raise_for_status()
_log_url(resp.url or api_url, kind="OK")
payload = resp.json()
if isinstance(payload, list): if isinstance(payload, list):
parsed: Dict[str, int] = {} parsed: Dict[str, int] = {}
for item in payload: for item in payload:
@@ -868,8 +900,14 @@ class EinschaltenPlugin(BasisPlugin):
if not url: if not url:
return return
try: try:
_, body = self._http_get_text(url, timeout=20) _log_url(url, kind="GET")
payload = _extract_ng_state_payload(body) _notify_url(url)
sess = self._get_session()
resp = sess.get(url, headers=HEADERS, timeout=20)
resp.raise_for_status()
_log_url(resp.url or url, kind="OK")
_log_response_html(resp.url or url, resp.text)
payload = _extract_ng_state_payload(resp.text)
parsed = _parse_ng_state_genres(payload) parsed = _parse_ng_state_genres(payload)
if parsed: if parsed:
self._genre_id_by_name.clear() self._genre_id_by_name.clear()
@@ -877,7 +915,7 @@ class EinschaltenPlugin(BasisPlugin):
except Exception: except Exception:
return return
async def search_titles(self, query: str, progress_callback: ProgressCallback = None) -> List[str]: async def search_titles(self, query: str) -> List[str]:
if not REQUESTS_AVAILABLE: if not REQUESTS_AVAILABLE:
return [] return []
query = (query or "").strip() query = (query or "").strip()
@@ -886,12 +924,9 @@ class EinschaltenPlugin(BasisPlugin):
if not self._get_base_url(): if not self._get_base_url():
return [] return []
_emit_progress(progress_callback, "Einschalten Suche", 15)
movies = self._fetch_search_movies(query) movies = self._fetch_search_movies(query)
if not movies: if not movies:
_emit_progress(progress_callback, "Fallback: Index filtern", 45)
movies = _filter_movies_by_title(query, self._load_movies()) movies = _filter_movies_by_title(query, self._load_movies())
_emit_progress(progress_callback, f"Treffer verarbeiten ({len(movies)})", 75)
titles: List[str] = [] titles: List[str] = []
seen: set[str] = set() seen: set[str] = set()
for movie in movies: for movie in movies:
@@ -901,7 +936,6 @@ class EinschaltenPlugin(BasisPlugin):
self._id_by_title[movie.title] = movie.id self._id_by_title[movie.title] = movie.id
titles.append(movie.title) titles.append(movie.title)
titles.sort(key=lambda value: value.casefold()) titles.sort(key=lambda value: value.casefold())
_emit_progress(progress_callback, f"Fertig: {len(titles)} Treffer", 95)
return titles return titles
def genres(self) -> List[str]: def genres(self) -> List[str]:
@@ -937,8 +971,14 @@ class EinschaltenPlugin(BasisPlugin):
if not url: if not url:
return [] return []
try: try:
_, body = self._http_get_text(url, timeout=20) _log_url(url, kind="GET")
payload = _extract_ng_state_payload(body) _notify_url(url)
sess = self._get_session()
resp = sess.get(url, headers=HEADERS, timeout=20)
resp.raise_for_status()
_log_url(resp.url or url, kind="OK")
_log_response_html(resp.url or url, resp.text)
payload = _extract_ng_state_payload(resp.text)
except Exception: except Exception:
return [] return []
if not isinstance(payload, dict): if not isinstance(payload, dict):

View File

@@ -11,7 +11,7 @@ from dataclasses import dataclass
import re import re
from urllib.parse import quote, urlencode from urllib.parse import quote, urlencode
from urllib.parse import urljoin from urllib.parse import urljoin
from typing import TYPE_CHECKING, Any, Callable, Dict, List, Optional, Tuple from typing import TYPE_CHECKING, Any, Dict, List, Optional, Tuple
try: # pragma: no cover - optional dependency try: # pragma: no cover - optional dependency
import requests import requests
@@ -53,16 +53,6 @@ SETTING_LOG_URLS = "log_urls_filmpalast"
SETTING_DUMP_HTML = "dump_html_filmpalast" SETTING_DUMP_HTML = "dump_html_filmpalast"
SETTING_SHOW_URL_INFO = "show_url_info_filmpalast" SETTING_SHOW_URL_INFO = "show_url_info_filmpalast"
SETTING_LOG_ERRORS = "log_errors_filmpalast" SETTING_LOG_ERRORS = "log_errors_filmpalast"
ProgressCallback = Optional[Callable[[str, Optional[int]], Any]]
def _emit_progress(callback: ProgressCallback, message: str, percent: Optional[int] = None) -> None:
if not callable(callback):
return
try:
callback(str(message or ""), None if percent is None else int(percent))
except Exception:
return
HEADERS = { HEADERS = {
"User-Agent": "Mozilla/5.0 (Kodi; ViewIt) AppleWebKit/537.36 (KHTML, like Gecko)", "User-Agent": "Mozilla/5.0 (Kodi; ViewIt) AppleWebKit/537.36 (KHTML, like Gecko)",
"Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8", "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
@@ -216,26 +206,16 @@ def _get_soup(url: str, *, session: Optional[RequestsSession] = None) -> Beautif
raise RuntimeError("requests/bs4 sind nicht verfuegbar.") raise RuntimeError("requests/bs4 sind nicht verfuegbar.")
_log_visit(url) _log_visit(url)
sess = session or get_requests_session("filmpalast", headers=HEADERS) sess = session or get_requests_session("filmpalast", headers=HEADERS)
response = None
try: try:
response = sess.get(url, headers=HEADERS, timeout=DEFAULT_TIMEOUT) response = sess.get(url, headers=HEADERS, timeout=DEFAULT_TIMEOUT)
response.raise_for_status() response.raise_for_status()
except Exception as exc: except Exception as exc:
_log_error_message(f"GET {url} failed: {exc}") _log_error_message(f"GET {url} failed: {exc}")
raise raise
try: if response.url and response.url != url:
final_url = (response.url or url) if response is not None else url _log_url_event(response.url, kind="REDIRECT")
body = (response.text or "") if response is not None else "" _log_response_html(url, response.text)
if final_url != url: return BeautifulSoup(response.text, "html.parser")
_log_url_event(final_url, kind="REDIRECT")
_log_response_html(url, body)
return BeautifulSoup(body, "html.parser")
finally:
if response is not None:
try:
response.close()
except Exception:
pass
class FilmpalastPlugin(BasisPlugin): class FilmpalastPlugin(BasisPlugin):
@@ -244,7 +224,6 @@ class FilmpalastPlugin(BasisPlugin):
def __init__(self) -> None: def __init__(self) -> None:
self._title_to_url: Dict[str, str] = {} self._title_to_url: Dict[str, str] = {}
self._title_meta: Dict[str, tuple[str, str]] = {}
self._series_entries: Dict[str, Dict[int, Dict[int, EpisodeEntry]]] = {} self._series_entries: Dict[str, Dict[int, Dict[int, EpisodeEntry]]] = {}
self._hoster_cache: Dict[str, Dict[str, str]] = {} self._hoster_cache: Dict[str, Dict[str, str]] = {}
self._genre_to_url: Dict[str, str] = {} self._genre_to_url: Dict[str, str] = {}
@@ -373,7 +352,6 @@ class FilmpalastPlugin(BasisPlugin):
seen_titles: set[str] = set() seen_titles: set[str] = set()
seen_urls: set[str] = set() seen_urls: set[str] = set()
for base_url, params in search_requests: for base_url, params in search_requests:
response = None
try: try:
request_url = base_url if not params else f"{base_url}?{urlencode(params)}" request_url = base_url if not params else f"{base_url}?{urlencode(params)}"
_log_url_event(request_url, kind="GET") _log_url_event(request_url, kind="GET")
@@ -387,12 +365,6 @@ class FilmpalastPlugin(BasisPlugin):
except Exception as exc: except Exception as exc:
_log_error_message(f"search request failed ({base_url}): {exc}") _log_error_message(f"search request failed ({base_url}): {exc}")
continue continue
finally:
if response is not None:
try:
response.close()
except Exception:
pass
anchors = soup.select("article.liste h2 a[href], article.liste h3 a[href]") anchors = soup.select("article.liste h2 a[href], article.liste h3 a[href]")
if not anchors: if not anchors:
@@ -494,13 +466,9 @@ class FilmpalastPlugin(BasisPlugin):
titles.sort(key=lambda value: value.casefold()) titles.sort(key=lambda value: value.casefold())
return titles return titles
async def search_titles(self, query: str, progress_callback: ProgressCallback = None) -> List[str]: async def search_titles(self, query: str) -> List[str]:
_emit_progress(progress_callback, "Filmpalast Suche", 15)
hits = self._search_hits(query) hits = self._search_hits(query)
_emit_progress(progress_callback, f"Treffer verarbeiten ({len(hits)})", 70) return self._apply_hits_to_title_index(hits)
titles = self._apply_hits_to_title_index(hits)
_emit_progress(progress_callback, f"Fertig: {len(titles)} Treffer", 95)
return titles
def _parse_genres(self, soup: BeautifulSoupT) -> Dict[str, str]: def _parse_genres(self, soup: BeautifulSoupT) -> Dict[str, str]:
genres: Dict[str, str] = {} genres: Dict[str, str] = {}
@@ -723,64 +691,6 @@ class FilmpalastPlugin(BasisPlugin):
return hit.url return hit.url
return "" return ""
def _store_title_meta(self, title: str, *, plot: str = "", poster: str = "") -> None:
title = (title or "").strip()
if not title:
return
old_plot, old_poster = self._title_meta.get(title, ("", ""))
merged_plot = (plot or old_plot or "").strip()
merged_poster = (poster or old_poster or "").strip()
self._title_meta[title] = (merged_plot, merged_poster)
def _extract_detail_metadata(self, soup: BeautifulSoupT) -> tuple[str, str]:
if not soup:
return "", ""
root = soup.select_one("div#content[role='main']") or soup
detail = root.select_one("article.detail") or root
plot = ""
poster = ""
# Filmpalast Detailseite: bevorzugt den dedizierten Filmhandlung-Block.
plot_node = detail.select_one(
"li[itemtype='http://schema.org/Movie'] span[itemprop='description']"
)
if plot_node is not None:
plot = (plot_node.get_text(" ", strip=True) or "").strip()
if not plot:
hidden_plot = detail.select_one("cite span.hidden")
if hidden_plot is not None:
plot = (hidden_plot.get_text(" ", strip=True) or "").strip()
if not plot:
for selector in ("meta[property='og:description']", "meta[name='description']"):
node = root.select_one(selector)
if node is None:
continue
content = (node.get("content") or "").strip()
if content:
plot = content
break
# Filmpalast Detailseite: Cover liegt stabil in `img.cover2`.
cover = detail.select_one("img.cover2")
if cover is not None:
value = (cover.get("data-src") or cover.get("src") or "").strip()
if value:
candidate = _absolute_url(value)
lower = candidate.casefold()
if "/themes/" not in lower and "spacer.gif" not in lower and "/files/movies/" in lower:
poster = candidate
if not poster:
thumb_node = detail.select_one("li[itemtype='http://schema.org/Movie'] img[itemprop='image']")
if thumb_node is not None:
value = (thumb_node.get("data-src") or thumb_node.get("src") or "").strip()
if value:
candidate = _absolute_url(value)
lower = candidate.casefold()
if "/themes/" not in lower and "spacer.gif" not in lower and "/files/movies/" in lower:
poster = candidate
return plot, poster
def remember_series_url(self, title: str, series_url: str) -> None: def remember_series_url(self, title: str, series_url: str) -> None:
title = (title or "").strip() title = (title or "").strip()
series_url = (series_url or "").strip() series_url = (series_url or "").strip()
@@ -801,52 +711,6 @@ class FilmpalastPlugin(BasisPlugin):
return _series_hint_value(series_key) return _series_hint_value(series_key)
return "" return ""
def metadata_for(self, title: str) -> tuple[dict[str, str], dict[str, str], list[object] | None]:
title = (title or "").strip()
if not title:
return {}, {}, None
info: dict[str, str] = {"title": title}
art: dict[str, str] = {}
cached_plot, cached_poster = self._title_meta.get(title, ("", ""))
if cached_plot:
info["plot"] = cached_plot
if cached_poster:
art = {"thumb": cached_poster, "poster": cached_poster}
if "plot" in info and art:
return info, art, None
detail_url = self._ensure_title_url(title)
if not detail_url:
series_key = self._series_key_for_title(title) or self._ensure_series_entries_for_title(title)
if series_key:
seasons = self._series_entries.get(series_key, {})
first_entry: Optional[EpisodeEntry] = None
for season_number in sorted(seasons.keys()):
episodes = seasons.get(season_number, {})
for episode_number in sorted(episodes.keys()):
first_entry = episodes.get(episode_number)
if first_entry is not None:
break
if first_entry is not None:
break
detail_url = first_entry.url if first_entry is not None else ""
if not detail_url:
return info, art, None
try:
soup = _get_soup(detail_url, session=get_requests_session("filmpalast", headers=HEADERS))
plot, poster = self._extract_detail_metadata(soup)
except Exception:
plot, poster = "", ""
if plot:
info["plot"] = plot
if poster:
art = {"thumb": poster, "poster": poster}
self._store_title_meta(title, plot=info.get("plot", ""), poster=poster)
return info, art, None
def is_movie(self, title: str) -> bool: def is_movie(self, title: str) -> bool:
title = (title or "").strip() title = (title or "").strip()
if not title: if not title:
@@ -1049,7 +913,6 @@ class FilmpalastPlugin(BasisPlugin):
redirected = link redirected = link
if self._requests_available: if self._requests_available:
response = None
try: try:
session = get_requests_session("filmpalast", headers=HEADERS) session = get_requests_session("filmpalast", headers=HEADERS)
response = session.get(link, headers=HEADERS, timeout=DEFAULT_TIMEOUT, allow_redirects=True) response = session.get(link, headers=HEADERS, timeout=DEFAULT_TIMEOUT, allow_redirects=True)
@@ -1057,12 +920,6 @@ class FilmpalastPlugin(BasisPlugin):
redirected = (response.url or link).strip() or link redirected = (response.url or link).strip() or link
except Exception: except Exception:
redirected = link redirected = link
finally:
if response is not None:
try:
response.close()
except Exception:
pass
# 2) Danach optional die Redirect-URL nochmals auflösen. # 2) Danach optional die Redirect-URL nochmals auflösen.
if callable(resolve_with_resolveurl) and redirected and redirected != link: if callable(resolve_with_resolveurl) and redirected and redirected != link:

View File

@@ -17,7 +17,7 @@ import os
import re import re
import time import time
import unicodedata import unicodedata
from typing import TYPE_CHECKING, Any, Callable, Dict, List, Optional, Tuple from typing import TYPE_CHECKING, Any, Dict, List, Optional, Tuple
from urllib.parse import quote from urllib.parse import quote
try: # pragma: no cover - optional dependency try: # pragma: no cover - optional dependency
@@ -80,16 +80,6 @@ SESSION_CACHE_MAX_TITLE_URLS = 800
CATALOG_SEARCH_TTL_SECONDS = 600 CATALOG_SEARCH_TTL_SECONDS = 600
CATALOG_SEARCH_CACHE_KEY = "catalog_index" CATALOG_SEARCH_CACHE_KEY = "catalog_index"
_CATALOG_INDEX_MEMORY: tuple[float, List["SeriesResult"]] = (0.0, []) _CATALOG_INDEX_MEMORY: tuple[float, List["SeriesResult"]] = (0.0, [])
ProgressCallback = Optional[Callable[[str, Optional[int]], Any]]
def _emit_progress(callback: ProgressCallback, message: str, percent: Optional[int] = None) -> None:
if not callable(callback):
return
try:
callback(str(message or ""), None if percent is None else int(percent))
except Exception:
return
@dataclass @dataclass
@@ -408,56 +398,37 @@ def _get_soup(url: str, *, session: Optional[RequestsSession] = None) -> Beautif
_ensure_requests() _ensure_requests()
_log_visit(url) _log_visit(url)
sess = session or get_requests_session("serienstream", headers=HEADERS) sess = session or get_requests_session("serienstream", headers=HEADERS)
response = None
try: try:
response = sess.get(url, headers=HEADERS, timeout=DEFAULT_TIMEOUT) response = sess.get(url, headers=HEADERS, timeout=DEFAULT_TIMEOUT)
response.raise_for_status() response.raise_for_status()
except Exception as exc: except Exception as exc:
_log_error(f"GET {url} failed: {exc}") _log_error(f"GET {url} failed: {exc}")
raise raise
try: if response.url and response.url != url:
final_url = (response.url or url) if response is not None else url _log_url(response.url, kind="REDIRECT")
body = (response.text or "") if response is not None else "" _log_response_html(url, response.text)
if final_url != url: if _looks_like_cloudflare_challenge(response.text):
_log_url(final_url, kind="REDIRECT") raise RuntimeError("Cloudflare-Schutz erkannt. requests reicht ggf. nicht aus.")
_log_response_html(url, body) return BeautifulSoup(response.text, "html.parser")
if _looks_like_cloudflare_challenge(body):
raise RuntimeError("Cloudflare-Schutz erkannt. requests reicht ggf. nicht aus.")
return BeautifulSoup(body, "html.parser")
finally:
if response is not None:
try:
response.close()
except Exception:
pass
def _get_html_simple(url: str) -> str: def _get_html_simple(url: str) -> str:
_ensure_requests() _ensure_requests()
_log_visit(url) _log_visit(url)
sess = get_requests_session("serienstream", headers=HEADERS) sess = get_requests_session("serienstream", headers=HEADERS)
response = None
try: try:
response = sess.get(url, headers=HEADERS, timeout=DEFAULT_TIMEOUT) response = sess.get(url, headers=HEADERS, timeout=DEFAULT_TIMEOUT)
response.raise_for_status() response.raise_for_status()
except Exception as exc: except Exception as exc:
_log_error(f"GET {url} failed: {exc}") _log_error(f"GET {url} failed: {exc}")
raise raise
try: if response.url and response.url != url:
final_url = (response.url or url) if response is not None else url _log_url(response.url, kind="REDIRECT")
body = (response.text or "") if response is not None else "" body = response.text
if final_url != url: _log_response_html(url, body)
_log_url(final_url, kind="REDIRECT") if _looks_like_cloudflare_challenge(body):
_log_response_html(url, body) raise RuntimeError("Cloudflare-Schutz erkannt. requests reicht ggf. nicht aus.")
if _looks_like_cloudflare_challenge(body): return body
raise RuntimeError("Cloudflare-Schutz erkannt. requests reicht ggf. nicht aus.")
return body
finally:
if response is not None:
try:
response.close()
except Exception:
pass
def _get_soup_simple(url: str) -> BeautifulSoupT: def _get_soup_simple(url: str) -> BeautifulSoupT:
@@ -501,7 +472,6 @@ def _search_series_api(query: str) -> List[SeriesResult]:
terms.extend([token for token in query.split() if token]) terms.extend([token for token in query.split() if token])
seen_urls: set[str] = set() seen_urls: set[str] = set()
for term in terms: for term in terms:
response = None
try: try:
response = sess.get( response = sess.get(
f"{_get_base_url()}/api/search/suggest", f"{_get_base_url()}/api/search/suggest",
@@ -516,12 +486,6 @@ def _search_series_api(query: str) -> List[SeriesResult]:
payload = response.json() payload = response.json()
except Exception: except Exception:
continue continue
finally:
if response is not None:
try:
response.close()
except Exception:
pass
shows = payload.get("shows") if isinstance(payload, dict) else None shows = payload.get("shows") if isinstance(payload, dict) else None
if not isinstance(shows, list): if not isinstance(shows, list):
continue continue
@@ -594,7 +558,7 @@ def _search_series_server(query: str) -> List[SeriesResult]:
return [] return []
def _extract_catalog_index_from_html(body: str, *, progress_callback: ProgressCallback = None) -> List[SeriesResult]: def _extract_catalog_index_from_html(body: str) -> List[SeriesResult]:
items: List[SeriesResult] = [] items: List[SeriesResult] = []
if not body: if not body:
return items return items
@@ -605,9 +569,7 @@ def _extract_catalog_index_from_html(body: str, *, progress_callback: ProgressCa
) )
anchor_re = re.compile(r"<a[^>]+href=[\"']([^\"']+)[\"'][^>]*>(.*?)</a>", re.IGNORECASE | re.DOTALL) anchor_re = re.compile(r"<a[^>]+href=[\"']([^\"']+)[\"'][^>]*>(.*?)</a>", re.IGNORECASE | re.DOTALL)
data_search_re = re.compile(r"data-search=[\"']([^\"']*)[\"']", re.IGNORECASE) data_search_re = re.compile(r"data-search=[\"']([^\"']*)[\"']", re.IGNORECASE)
for idx, match in enumerate(item_re.finditer(body), start=1): for match in item_re.finditer(body):
if idx == 1 or idx % 200 == 0:
_emit_progress(progress_callback, f"Katalog parsen {idx}", 62)
block = match.group(0) block = match.group(0)
inner = match.group(1) or "" inner = match.group(1) or ""
anchor_match = anchor_re.search(inner) anchor_match = anchor_re.search(inner)
@@ -689,33 +651,26 @@ def _store_catalog_index_in_cache(items: List[SeriesResult]) -> None:
_session_cache_set(CATALOG_SEARCH_CACHE_KEY, payload, ttl_seconds=CATALOG_SEARCH_TTL_SECONDS) _session_cache_set(CATALOG_SEARCH_CACHE_KEY, payload, ttl_seconds=CATALOG_SEARCH_TTL_SECONDS)
def search_series(query: str, *, progress_callback: ProgressCallback = None) -> List[SeriesResult]: def search_series(query: str) -> List[SeriesResult]:
"""Sucht Serien im (/serien)-Katalog nach Titel. Nutzt Cache + Ein-Pass-Filter.""" """Sucht Serien im (/serien)-Katalog nach Titel. Nutzt Cache + Ein-Pass-Filter."""
_ensure_requests() _ensure_requests()
if not _normalize_search_text(query): if not _normalize_search_text(query):
return [] return []
_emit_progress(progress_callback, "Server-Suche", 15)
server_results = _search_series_server(query) server_results = _search_series_server(query)
if server_results: if server_results:
_emit_progress(progress_callback, f"Server-Treffer: {len(server_results)}", 35)
return [entry for entry in server_results if entry.title and _matches_query(query, title=entry.title)] return [entry for entry in server_results if entry.title and _matches_query(query, title=entry.title)]
_emit_progress(progress_callback, "Pruefe Such-Cache", 42)
cached = _load_catalog_index_from_cache() cached = _load_catalog_index_from_cache()
if cached is not None: if cached is not None:
_emit_progress(progress_callback, f"Cache-Treffer: {len(cached)}", 52)
return [entry for entry in cached if entry.title and _matches_query(query, title=entry.title)] return [entry for entry in cached if entry.title and _matches_query(query, title=entry.title)]
_emit_progress(progress_callback, "Lade Katalogseite", 58)
catalog_url = f"{_get_base_url()}/serien?by=genre" catalog_url = f"{_get_base_url()}/serien?by=genre"
body = _get_html_simple(catalog_url) body = _get_html_simple(catalog_url)
items = _extract_catalog_index_from_html(body, progress_callback=progress_callback) items = _extract_catalog_index_from_html(body)
if not items: if not items:
_emit_progress(progress_callback, "Fallback-Parser", 70)
soup = BeautifulSoup(body, "html.parser") soup = BeautifulSoup(body, "html.parser")
items = _catalog_index_from_soup(soup) items = _catalog_index_from_soup(soup)
if items: if items:
_store_catalog_index_in_cache(items) _store_catalog_index_in_cache(items)
_emit_progress(progress_callback, f"Filtere Treffer ({len(items)})", 85)
return [entry for entry in items if entry.title and _matches_query(query, title=entry.title)] return [entry for entry in items if entry.title and _matches_query(query, title=entry.title)]
@@ -1034,23 +989,15 @@ def resolve_redirect(target_url: str) -> Optional[str]:
_get_soup(_get_base_url(), session=session) _get_soup(_get_base_url(), session=session)
except Exception: except Exception:
pass pass
response = None response = session.get(
try: normalized_url,
response = session.get( headers=HEADERS,
normalized_url, timeout=DEFAULT_TIMEOUT,
headers=HEADERS, allow_redirects=True,
timeout=DEFAULT_TIMEOUT, )
allow_redirects=True, if response.url:
) _log_url(response.url, kind="RESOLVED")
if response.url: return response.url if response.url else None
_log_url(response.url, kind="RESOLVED")
return response.url if response.url else None
finally:
if response is not None:
try:
response.close()
except Exception:
pass
def scrape_series_detail( def scrape_series_detail(
@@ -1096,7 +1043,7 @@ class SerienstreamPlugin(BasisPlugin):
name = "Serienstream" name = "Serienstream"
version = "1.0.0" version = "1.0.0"
POPULAR_GENRE_LABEL = "Haeufig gesehen" POPULAR_GENRE_LABEL = "⭐ Beliebte Serien"
def __init__(self) -> None: def __init__(self) -> None:
self._series_results: Dict[str, SeriesResult] = {} self._series_results: Dict[str, SeriesResult] = {}
@@ -1734,7 +1681,7 @@ class SerienstreamPlugin(BasisPlugin):
return self._episode_label_cache.get(cache_key, {}).get(episode_label) return self._episode_label_cache.get(cache_key, {}).get(episode_label)
return None return None
async def search_titles(self, query: str, progress_callback: ProgressCallback = None) -> List[str]: async def search_titles(self, query: str) -> List[str]:
query = query.strip() query = query.strip()
if not query: if not query:
self._series_results.clear() self._series_results.clear()
@@ -1748,8 +1695,7 @@ class SerienstreamPlugin(BasisPlugin):
try: try:
# Nutzt den Katalog (/serien), der jetzt nach Genres gruppiert ist. # Nutzt den Katalog (/serien), der jetzt nach Genres gruppiert ist.
# Alternativ gäbe es ein Ajax-Endpoint, aber der ist nicht immer zuverlässig erreichbar. # Alternativ gäbe es ein Ajax-Endpoint, aber der ist nicht immer zuverlässig erreichbar.
_emit_progress(progress_callback, "Serienstream Suche startet", 10) results = search_series(query)
results = search_series(query, progress_callback=progress_callback)
except Exception as exc: # pragma: no cover - defensive logging except Exception as exc: # pragma: no cover - defensive logging
self._series_results.clear() self._series_results.clear()
self._season_cache.clear() self._season_cache.clear()
@@ -1762,7 +1708,6 @@ class SerienstreamPlugin(BasisPlugin):
self._season_cache.clear() self._season_cache.clear()
self._season_links_cache.clear() self._season_links_cache.clear()
self._episode_label_cache.clear() self._episode_label_cache.clear()
_emit_progress(progress_callback, f"Treffer aufbereitet: {len(results)}", 95)
return [result.title for result in results] return [result.title for result in results]
def _ensure_seasons(self, title: str) -> List[SeasonInfo]: def _ensure_seasons(self, title: str) -> List[SeasonInfo]:

View File

@@ -19,7 +19,7 @@ import hashlib
import os import os
import re import re
import json import json
from typing import TYPE_CHECKING, Any, Callable, Dict, List, Optional from typing import TYPE_CHECKING, Any, Dict, List, Optional
from urllib.parse import urlencode, urljoin from urllib.parse import urlencode, urljoin
try: # pragma: no cover - optional dependency try: # pragma: no cover - optional dependency
@@ -66,25 +66,18 @@ SETTING_LOG_URLS = "log_urls_topstreamfilm"
SETTING_DUMP_HTML = "dump_html_topstreamfilm" SETTING_DUMP_HTML = "dump_html_topstreamfilm"
SETTING_SHOW_URL_INFO = "show_url_info_topstreamfilm" SETTING_SHOW_URL_INFO = "show_url_info_topstreamfilm"
SETTING_LOG_ERRORS = "log_errors_topstreamfilm" SETTING_LOG_ERRORS = "log_errors_topstreamfilm"
SETTING_GENRE_MAX_PAGES = "topstream_genre_max_pages"
DEFAULT_TIMEOUT = 20 DEFAULT_TIMEOUT = 20
DEFAULT_PREFERRED_HOSTERS = ["supervideo", "dropload", "voe"] DEFAULT_PREFERRED_HOSTERS = ["supervideo", "dropload", "voe"]
MEINECLOUD_HOST = "meinecloud.click" MEINECLOUD_HOST = "meinecloud.click"
DEFAULT_GENRE_MAX_PAGES = 20
HARD_MAX_GENRE_PAGES = 200
HEADERS = { HEADERS = {
"User-Agent": "Mozilla/5.0 (Kodi; ViewIt) AppleWebKit/537.36 (KHTML, like Gecko)", "User-Agent": "Mozilla/5.0 (Kodi; ViewIt) AppleWebKit/537.36 (KHTML, like Gecko)",
"Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8", "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
"Accept-Language": "de-DE,de;q=0.9,en;q=0.8", "Accept-Language": "de-DE,de;q=0.9,en;q=0.8",
"Connection": "keep-alive", "Connection": "keep-alive",
} }
ProgressCallback = Optional[Callable[[str, Optional[int]], Any]]
def _emit_progress(callback: ProgressCallback, message: str, percent: Optional[int] = None) -> None:
if not callable(callback):
return
try:
callback(str(message or ""), None if percent is None else int(percent))
except Exception:
return
@dataclass(frozen=True) @dataclass(frozen=True)
@@ -94,7 +87,6 @@ class SearchHit:
title: str title: str
url: str url: str
description: str = "" description: str = ""
poster: str = ""
def _normalize_search_text(value: str) -> str: def _normalize_search_text(value: str) -> str:
@@ -147,7 +139,6 @@ class TopstreamfilmPlugin(BasisPlugin):
self._season_to_episode_numbers: Dict[tuple[str, str], List[int]] = {} self._season_to_episode_numbers: Dict[tuple[str, str], List[int]] = {}
self._episode_title_by_number: Dict[tuple[str, int, int], str] = {} self._episode_title_by_number: Dict[tuple[str, int, int], str] = {}
self._detail_html_cache: Dict[str, str] = {} self._detail_html_cache: Dict[str, str] = {}
self._title_meta: Dict[str, tuple[str, str]] = {}
self._popular_cache: List[str] | None = None self._popular_cache: List[str] | None = None
self._default_preferred_hosters: List[str] = list(DEFAULT_PREFERRED_HOSTERS) self._default_preferred_hosters: List[str] = list(DEFAULT_PREFERRED_HOSTERS)
self._preferred_hosters: List[str] = list(self._default_preferred_hosters) self._preferred_hosters: List[str] = list(self._default_preferred_hosters)
@@ -344,6 +335,22 @@ class TopstreamfilmPlugin(BasisPlugin):
return urljoin(base if base.endswith("/") else base + "/", href) return urljoin(base if base.endswith("/") else base + "/", href)
return href return href
def _get_setting_bool(self, setting_id: str, *, default: bool = False) -> bool:
return get_setting_bool(ADDON_ID, setting_id, default=default)
def _get_setting_int(self, setting_id: str, *, default: int) -> int:
if xbmcaddon is None:
return default
try:
addon = xbmcaddon.Addon(ADDON_ID)
getter = getattr(addon, "getSettingInt", None)
if callable(getter):
return int(getter(setting_id))
raw = str(addon.getSetting(setting_id) or "").strip()
return int(raw) if raw else default
except Exception:
return default
def _notify_url(self, url: str) -> None: def _notify_url(self, url: str) -> None:
notify_url( notify_url(
ADDON_ID, ADDON_ID,
@@ -412,7 +419,6 @@ class TopstreamfilmPlugin(BasisPlugin):
continue continue
seen.add(hit.title) seen.add(hit.title)
self._title_to_url[hit.title] = hit.url self._title_to_url[hit.title] = hit.url
self._store_title_meta(hit.title, plot=hit.description, poster=hit.poster)
titles.append(hit.title) titles.append(hit.title)
if titles: if titles:
self._save_title_url_cache() self._save_title_url_cache()
@@ -471,69 +477,6 @@ class TopstreamfilmPlugin(BasisPlugin):
except Exception: except Exception:
return "" return ""
def _pick_image_from_node(self, node: Any) -> str:
if node is None:
return ""
image = node.select_one("img")
if image is None:
return ""
for attr in ("data-src", "src"):
value = (image.get(attr) or "").strip()
if value and "lazy_placeholder" not in value.casefold():
return self._absolute_external_url(value, base=self._get_base_url())
srcset = (image.get("data-srcset") or image.get("srcset") or "").strip()
if srcset:
first = srcset.split(",")[0].strip().split(" ", 1)[0].strip()
if first:
return self._absolute_external_url(first, base=self._get_base_url())
return ""
def _store_title_meta(self, title: str, *, plot: str = "", poster: str = "") -> None:
title = (title or "").strip()
if not title:
return
old_plot, old_poster = self._title_meta.get(title, ("", ""))
merged_plot = (plot or old_plot or "").strip()
merged_poster = (poster or old_poster or "").strip()
self._title_meta[title] = (merged_plot, merged_poster)
def _extract_detail_metadata(self, soup: BeautifulSoupT) -> tuple[str, str]:
if not soup:
return "", ""
plot = ""
poster = ""
for selector in ("meta[property='og:description']", "meta[name='description']"):
node = soup.select_one(selector)
if node is None:
continue
content = (node.get("content") or "").strip()
if content:
plot = content
break
if not plot:
candidates: list[str] = []
for paragraph in soup.select("article p, .TPost p, .Description p, .entry-content p"):
text = (paragraph.get_text(" ", strip=True) or "").strip()
if len(text) >= 60:
candidates.append(text)
if candidates:
plot = max(candidates, key=len)
for selector in ("meta[property='og:image']", "meta[name='twitter:image']"):
node = soup.select_one(selector)
if node is None:
continue
content = (node.get("content") or "").strip()
if content:
poster = self._absolute_external_url(content, base=self._get_base_url())
break
if not poster:
for selector in ("article", ".TPost", ".entry-content"):
poster = self._pick_image_from_node(soup.select_one(selector))
if poster:
break
return plot, poster
def _clear_stream_index_for_title(self, title: str) -> None: def _clear_stream_index_for_title(self, title: str) -> None:
for key in list(self._season_to_episode_numbers.keys()): for key in list(self._season_to_episode_numbers.keys()):
if key[0] == title: if key[0] == title:
@@ -641,25 +584,15 @@ class TopstreamfilmPlugin(BasisPlugin):
session = self._get_session() session = self._get_session()
self._log_url(url, kind="VISIT") self._log_url(url, kind="VISIT")
self._notify_url(url) self._notify_url(url)
response = None
try: try:
response = session.get(url, timeout=DEFAULT_TIMEOUT) response = session.get(url, timeout=DEFAULT_TIMEOUT)
response.raise_for_status() response.raise_for_status()
except Exception as exc: except Exception as exc:
self._log_error(f"GET {url} failed: {exc}") self._log_error(f"GET {url} failed: {exc}")
raise raise
try: self._log_url(response.url, kind="OK")
final_url = (response.url or url) if response is not None else url self._log_response_html(response.url, response.text)
body = (response.text or "") if response is not None else "" return BeautifulSoup(response.text, "html.parser")
self._log_url(final_url, kind="OK")
self._log_response_html(final_url, body)
return BeautifulSoup(body, "html.parser")
finally:
if response is not None:
try:
response.close()
except Exception:
pass
def _get_detail_soup(self, title: str) -> Optional[BeautifulSoupT]: def _get_detail_soup(self, title: str) -> Optional[BeautifulSoupT]:
title = (title or "").strip() title = (title or "").strip()
@@ -768,17 +701,7 @@ class TopstreamfilmPlugin(BasisPlugin):
continue continue
if is_movie_hint: if is_movie_hint:
self._movie_title_hint.add(title) self._movie_title_hint.add(title)
description_tag = item.select_one(".TPMvCn .Description, .Description, .entry-summary") hits.append(SearchHit(title=title, url=self._absolute_url(href), description=""))
description = (description_tag.get_text(" ", strip=True) or "").strip() if description_tag else ""
poster = self._pick_image_from_node(item)
hits.append(
SearchHit(
title=title,
url=self._absolute_url(href),
description=description,
poster=poster,
)
)
return hits return hits
def is_movie(self, title: str) -> bool: def is_movie(self, title: str) -> bool:
@@ -851,7 +774,6 @@ class TopstreamfilmPlugin(BasisPlugin):
continue continue
seen.add(hit.title) seen.add(hit.title)
self._title_to_url[hit.title] = hit.url self._title_to_url[hit.title] = hit.url
self._store_title_meta(hit.title, plot=hit.description, poster=hit.poster)
titles.append(hit.title) titles.append(hit.title)
if titles: if titles:
self._save_title_url_cache() self._save_title_url_cache()
@@ -892,7 +814,7 @@ class TopstreamfilmPlugin(BasisPlugin):
# Sonst: Serie via Streams-Accordion parsen (falls vorhanden). # Sonst: Serie via Streams-Accordion parsen (falls vorhanden).
self._parse_stream_accordion(soup, title=title) self._parse_stream_accordion(soup, title=title)
async def search_titles(self, query: str, progress_callback: ProgressCallback = None) -> List[str]: async def search_titles(self, query: str) -> List[str]:
"""Sucht Titel ueber eine HTML-Suche. """Sucht Titel ueber eine HTML-Suche.
Erwartetes HTML (Snippet): Erwartetes HTML (Snippet):
@@ -905,7 +827,6 @@ class TopstreamfilmPlugin(BasisPlugin):
query = (query or "").strip() query = (query or "").strip()
if not query: if not query:
return [] return []
_emit_progress(progress_callback, "Topstreamfilm Suche", 15)
session = self._get_session() session = self._get_session()
url = self._get_base_url() + "/" url = self._get_base_url() + "/"
@@ -913,7 +834,6 @@ class TopstreamfilmPlugin(BasisPlugin):
request_url = f"{url}?{urlencode(params)}" request_url = f"{url}?{urlencode(params)}"
self._log_url(request_url, kind="GET") self._log_url(request_url, kind="GET")
self._notify_url(request_url) self._notify_url(request_url)
response = None
try: try:
response = session.get( response = session.get(
url, url,
@@ -924,28 +844,15 @@ class TopstreamfilmPlugin(BasisPlugin):
except Exception as exc: except Exception as exc:
self._log_error(f"GET {request_url} failed: {exc}") self._log_error(f"GET {request_url} failed: {exc}")
raise raise
try: self._log_url(response.url, kind="OK")
final_url = (response.url or request_url) if response is not None else request_url self._log_response_html(response.url, response.text)
body = (response.text or "") if response is not None else ""
self._log_url(final_url, kind="OK")
self._log_response_html(final_url, body)
if BeautifulSoup is None: if BeautifulSoup is None:
return [] return []
soup = BeautifulSoup(body, "html.parser") soup = BeautifulSoup(response.text, "html.parser")
finally:
if response is not None:
try:
response.close()
except Exception:
pass
hits: List[SearchHit] = [] hits: List[SearchHit] = []
items = soup.select("li.TPostMv") for item in soup.select("li.TPostMv"):
total_items = max(1, len(items))
for idx, item in enumerate(items, start=1):
if idx == 1 or idx % 20 == 0:
_emit_progress(progress_callback, f"Treffer pruefen {idx}/{total_items}", 55)
anchor = item.select_one("a[href]") anchor = item.select_one("a[href]")
if not anchor: if not anchor:
continue continue
@@ -963,8 +870,7 @@ class TopstreamfilmPlugin(BasisPlugin):
self._movie_title_hint.add(title) self._movie_title_hint.add(title)
description_tag = item.select_one(".TPMvCn .Description") description_tag = item.select_one(".TPMvCn .Description")
description = description_tag.get_text(" ", strip=True) if description_tag else "" description = description_tag.get_text(" ", strip=True) if description_tag else ""
poster = self._pick_image_from_node(item) hit = SearchHit(title=title, url=self._absolute_url(href), description=description)
hit = SearchHit(title=title, url=self._absolute_url(href), description=description, poster=poster)
if _matches_query(query, title=hit.title, description=hit.description): if _matches_query(query, title=hit.title, description=hit.description):
hits.append(hit) hits.append(hit)
@@ -977,41 +883,10 @@ class TopstreamfilmPlugin(BasisPlugin):
continue continue
seen.add(hit.title) seen.add(hit.title)
self._title_to_url[hit.title] = hit.url self._title_to_url[hit.title] = hit.url
self._store_title_meta(hit.title, plot=hit.description, poster=hit.poster)
titles.append(hit.title) titles.append(hit.title)
self._save_title_url_cache() self._save_title_url_cache()
_emit_progress(progress_callback, f"Fertig: {len(titles)} Treffer", 95)
return titles return titles
def metadata_for(self, title: str) -> tuple[dict[str, str], dict[str, str], list[object] | None]:
title = (title or "").strip()
if not title:
return {}, {}, None
info: dict[str, str] = {"title": title}
art: dict[str, str] = {}
cached_plot, cached_poster = self._title_meta.get(title, ("", ""))
if cached_plot:
info["plot"] = cached_plot
if cached_poster:
art = {"thumb": cached_poster, "poster": cached_poster}
if "plot" in info and art:
return info, art, None
soup = self._get_detail_soup(title)
if soup is None:
return info, art, None
plot, poster = self._extract_detail_metadata(soup)
if plot:
info["plot"] = plot
if poster:
art = {"thumb": poster, "poster": poster}
self._store_title_meta(title, plot=plot, poster=poster)
return info, art, None
def genres(self) -> List[str]: def genres(self) -> List[str]:
if not REQUESTS_AVAILABLE or BeautifulSoup is None: if not REQUESTS_AVAILABLE or BeautifulSoup is None:
return [] return []

View File

@@ -8,16 +8,8 @@ from __future__ import annotations
from typing import Optional from typing import Optional
_LAST_RESOLVE_ERROR = ""
def get_last_error() -> str:
return str(_LAST_RESOLVE_ERROR or "")
def resolve(url: str) -> Optional[str]: def resolve(url: str) -> Optional[str]:
global _LAST_RESOLVE_ERROR
_LAST_RESOLVE_ERROR = ""
if not url: if not url:
return None return None
try: try:
@@ -31,14 +23,12 @@ def resolve(url: str) -> Optional[str]:
hmf = hosted(url) hmf = hosted(url)
valid = getattr(hmf, "valid_url", None) valid = getattr(hmf, "valid_url", None)
if callable(valid) and not valid(): if callable(valid) and not valid():
_LAST_RESOLVE_ERROR = "invalid url"
return None return None
resolver = getattr(hmf, "resolve", None) resolver = getattr(hmf, "resolve", None)
if callable(resolver): if callable(resolver):
result = resolver() result = resolver()
return str(result) if result else None return str(result) if result else None
except Exception as exc: except Exception:
_LAST_RESOLVE_ERROR = str(exc or "")
pass pass
try: try:
@@ -46,8 +36,8 @@ def resolve(url: str) -> Optional[str]:
if callable(resolve_fn): if callable(resolve_fn):
result = resolve_fn(url) result = resolve_fn(url)
return str(result) if result else None return str(result) if result else None
except Exception as exc: except Exception:
_LAST_RESOLVE_ERROR = str(exc or "")
return None return None
return None return None

View File

@@ -1,99 +1,85 @@
<?xml version="1.0" encoding="UTF-8"?> <?xml version="1.0" encoding="UTF-8"?>
<settings> <settings>
<category label="Quellen"> <category label="Logging">
<setting id="serienstream_base_url" type="text" label="SerienStream Basis-URL" default="https://s.to" /> <setting id="debug_log_urls" type="bool" label="URL-Logging aktivieren (global)" default="false" />
<setting id="aniworld_base_url" type="text" label="AniWorld Basis-URL" default="https://aniworld.to" /> <setting id="debug_dump_html" type="bool" label="HTML-Dumps aktivieren (global)" default="false" />
<setting id="topstream_base_url" type="text" label="TopStream Basis-URL" default="https://topstreamfilm.live" /> <setting id="debug_show_url_info" type="bool" label="URL-Info anzeigen (global)" default="false" />
<setting id="einschalten_base_url" type="text" label="Einschalten Basis-URL" default="https://einschalten.in" /> <setting id="debug_log_errors" type="bool" label="Fehler-Logging aktivieren (global)" default="false" />
<setting id="filmpalast_base_url" type="text" label="Filmpalast Basis-URL" default="https://filmpalast.to" /> <setting id="log_max_mb" type="number" label="URL-Log: max. Datei-Größe (MB)" default="5" />
<setting id="doku_streams_base_url" type="text" label="Doku-Streams Basis-URL" default="https://doku-streams.com" /> <setting id="log_max_files" type="number" label="URL-Log: max. Rotationen" default="3" />
<setting id="dump_max_files" type="number" label="HTML-Dumps: max. Dateien pro Plugin" default="200" />
<setting id="log_urls_serienstream" type="bool" label="Serienstream: URL-Logging" default="false" />
<setting id="dump_html_serienstream" type="bool" label="Serienstream: HTML-Dumps" default="false" />
<setting id="show_url_info_serienstream" type="bool" label="Serienstream: URL-Info anzeigen" default="false" />
<setting id="log_errors_serienstream" type="bool" label="Serienstream: Fehler loggen" default="false" />
<setting id="log_urls_aniworld" type="bool" label="Aniworld: URL-Logging" default="false" />
<setting id="dump_html_aniworld" type="bool" label="Aniworld: HTML-Dumps" default="false" />
<setting id="show_url_info_aniworld" type="bool" label="Aniworld: URL-Info anzeigen" default="false" />
<setting id="log_errors_aniworld" type="bool" label="Aniworld: Fehler loggen" default="false" />
<setting id="log_urls_topstreamfilm" type="bool" label="Topstreamfilm: URL-Logging" default="false" />
<setting id="dump_html_topstreamfilm" type="bool" label="Topstreamfilm: HTML-Dumps" default="false" />
<setting id="show_url_info_topstreamfilm" type="bool" label="Topstreamfilm: URL-Info anzeigen" default="false" />
<setting id="log_errors_topstreamfilm" type="bool" label="Topstreamfilm: Fehler loggen" default="false" />
<setting id="log_urls_einschalten" type="bool" label="Einschalten: URL-Logging" default="false" />
<setting id="dump_html_einschalten" type="bool" label="Einschalten: HTML-Dumps" default="false" />
<setting id="show_url_info_einschalten" type="bool" label="Einschalten: URL-Info anzeigen" default="false" />
<setting id="log_errors_einschalten" type="bool" label="Einschalten: Fehler loggen" default="false" />
<setting id="log_urls_filmpalast" type="bool" label="Filmpalast: URL-Logging" default="false" />
<setting id="dump_html_filmpalast" type="bool" label="Filmpalast: HTML-Dumps" default="false" />
<setting id="show_url_info_filmpalast" type="bool" label="Filmpalast: URL-Info anzeigen" default="false" />
<setting id="log_errors_filmpalast" type="bool" label="Filmpalast: Fehler loggen" default="false" />
</category> </category>
<category label="TopStream">
<category label="Metadaten"> <setting id="topstream_base_url" type="text" label="Domain (BASE_URL)" default="https://topstreamfilm.live" />
<setting id="serienstream_metadata_source" type="enum" label="SerienStream Metadatenquelle" default="0" values="Automatisch|Quelle|TMDB|Mischen" /> <setting id="topstreamfilm_metadata_source" type="enum" label="Metadatenquelle" default="0" values="Auto|Quelle|TMDB|Mix" />
<setting id="aniworld_metadata_source" type="enum" label="AniWorld Metadatenquelle" default="0" values="Automatisch|Quelle|TMDB|Mischen" /> <setting id="topstream_genre_max_pages" type="number" label="Genres: max. Seiten laden (Pagination)" default="20" />
<setting id="topstreamfilm_metadata_source" type="enum" label="TopStream Metadatenquelle" default="0" values="Automatisch|Quelle|TMDB|Mischen" /> </category>
<setting id="einschalten_metadata_source" type="enum" label="Einschalten Metadatenquelle" default="0" values="Automatisch|Quelle|TMDB|Mischen" /> <category label="SerienStream">
<setting id="filmpalast_metadata_source" type="enum" label="Filmpalast Metadatenquelle" default="0" values="Automatisch|Quelle|TMDB|Mischen" /> <setting id="serienstream_base_url" type="text" label="Domain (BASE_URL)" default="https://s.to" />
<setting id="doku_streams_metadata_source" type="enum" label="Doku-Streams Metadatenquelle" default="0" values="Automatisch|Quelle|TMDB|Mischen" /> <setting id="serienstream_metadata_source" type="enum" label="Metadatenquelle" default="0" values="Auto|Quelle|TMDB|Mix" />
</category>
<category label="AniWorld">
<setting id="aniworld_base_url" type="text" label="Domain (BASE_URL)" default="https://aniworld.to" />
<setting id="aniworld_metadata_source" type="enum" label="Metadatenquelle" default="0" values="Auto|Quelle|TMDB|Mix" />
</category>
<category label="Einschalten">
<setting id="einschalten_base_url" type="text" label="Domain (BASE_URL)" default="https://einschalten.in" />
<setting id="einschalten_metadata_source" type="enum" label="Metadatenquelle" default="0" values="Auto|Quelle|TMDB|Mix" />
</category>
<category label="Filmpalast">
<setting id="filmpalast_base_url" type="text" label="Domain (BASE_URL)" default="https://filmpalast.to" />
<setting id="filmpalast_metadata_source" type="enum" label="Metadatenquelle" default="0" values="Auto|Quelle|TMDB|Mix" />
</category>
<category label="Doku-Streams">
<setting id="doku_streams_base_url" type="text" label="Domain (BASE_URL)" default="https://doku-streams.com" />
<setting id="doku_streams_metadata_source" type="enum" label="Metadatenquelle" default="0" values="Auto|Quelle|TMDB|Mix" />
</category>
<category label="TMDB">
<setting id="tmdb_enabled" type="bool" label="TMDB aktivieren" default="true" /> <setting id="tmdb_enabled" type="bool" label="TMDB aktivieren" default="true" />
<setting id="tmdb_language" type="text" label="TMDB Sprache (z. B. de-DE)" default="de-DE" />
<setting id="tmdb_show_plot" type="bool" label="TMDB Beschreibung anzeigen" default="true" />
<setting id="tmdb_show_art" type="bool" label="TMDB Poster und Vorschaubild anzeigen" default="true" />
<setting id="tmdb_show_fanart" type="bool" label="TMDB Fanart/Backdrop anzeigen" default="true" />
<setting id="tmdb_show_rating" type="bool" label="TMDB Bewertung anzeigen" default="true" />
<setting id="tmdb_show_votes" type="bool" label="TMDB Stimmen anzeigen" default="false" />
</category>
<category label="TMDB Erweitert">
<setting id="tmdb_api_key" type="text" label="TMDB API Key" default="" /> <setting id="tmdb_api_key" type="text" label="TMDB API Key" default="" />
<setting id="tmdb_prefetch_concurrency" type="number" label="TMDB: gleichzeitige Anfragen (1-20)" default="6" /> <setting id="tmdb_language" type="text" label="TMDB Sprache (z.B. de-DE)" default="de-DE" />
<setting id="tmdb_show_cast" type="bool" label="TMDB Besetzung anzeigen" default="false" /> <setting id="tmdb_prefetch_concurrency" type="number" label="TMDB: Parallelität (Prefetch, 1-20)" default="6" />
<setting id="tmdb_show_plot" type="bool" label="TMDB Plot anzeigen" default="true" />
<setting id="tmdb_show_art" type="bool" label="TMDB Poster/Thumb anzeigen" default="true" />
<setting id="tmdb_show_fanart" type="bool" label="TMDB Fanart/Backdrop anzeigen" default="true" />
<setting id="tmdb_show_rating" type="bool" label="TMDB Rating anzeigen" default="true" />
<setting id="tmdb_show_votes" type="bool" label="TMDB Vote-Count anzeigen" default="false" />
<setting id="tmdb_show_cast" type="bool" label="TMDB Cast anzeigen" default="false" />
<setting id="tmdb_show_episode_cast" type="bool" label="TMDB Besetzung pro Episode anzeigen" default="false" /> <setting id="tmdb_show_episode_cast" type="bool" label="TMDB Besetzung pro Episode anzeigen" default="false" />
<setting id="tmdb_genre_metadata" type="bool" label="TMDB Daten in Genre-Listen anzeigen" default="false" /> <setting id="tmdb_genre_metadata" type="bool" label="TMDB Meta in Genre-Liste anzeigen" default="false" />
<setting id="tmdb_log_requests" type="bool" label="TMDB API-Anfragen loggen" default="false" /> <setting id="tmdb_log_requests" type="bool" label="TMDB API Requests loggen" default="false" />
<setting id="tmdb_log_responses" type="bool" label="TMDB API-Antworten loggen" default="false" /> <setting id="tmdb_log_responses" type="bool" label="TMDB API Antworten loggen" default="false" />
</category> </category>
<category label="Update">
<category label="Updates"> <setting id="update_repo_url" type="text" label="Update-URL (addons.xml)" default="http://127.0.0.1:8080/repo/addons.xml" />
<setting id="update_channel" type="enum" label="Update-Kanal" default="0" values="Main|Nightly|Custom" /> <setting id="run_update_check" type="action" label="Jetzt auf Updates pruefen" action="RunPlugin(plugin://plugin.video.viewit/?action=check_updates)" option="close" />
<setting id="apply_update_channel" type="action" label="Update-Kanal jetzt anwenden" action="RunPlugin(plugin://plugin.video.viewit/?action=apply_update_channel)" option="close" /> <setting id="update_info" type="text" label="Kodi-Repository-Updates werden ueber den Kodi-Update-Mechanismus verarbeitet." default="" enable="false" />
<setting id="auto_update_enabled" type="bool" label="Automatische Updates (beim Start pruefen)" default="false" /> <setting id="update_version_addon" type="text" label="ViewIT Addon Version" default="-" enable="false" />
<setting id="select_update_version" type="action" label="Version waehlen und installieren" action="RunPlugin(plugin://plugin.video.viewit/?action=select_update_version)" option="close" /> <setting id="update_version_serienstream" type="text" label="Serienstream Plugin Version" default="-" enable="false" />
<setting id="update_installed_version" type="text" label="Installierte Version" default="-" enable="false" /> <setting id="update_version_aniworld" type="text" label="Aniworld Plugin Version" default="-" enable="false" />
<setting id="update_available_selected" type="text" label="Verfuegbar (gewaehlter Kanal)" default="-" enable="false" /> <setting id="update_version_einschalten" type="text" label="Einschalten Plugin Version" default="-" enable="false" />
<setting id="update_available_main" type="text" label="Verfuegbar Main" default="-" enable="false" /> <setting id="update_version_topstreamfilm" type="text" label="Topstreamfilm Plugin Version" default="-" enable="false" />
<setting id="update_available_nightly" type="text" label="Verfuegbar Nightly" default="-" enable="false" /> <setting id="update_version_filmpalast" type="text" label="Filmpalast Plugin Version" default="-" enable="false" />
<setting id="update_active_channel" type="text" label="Aktiver Kanal" default="-" enable="false" /> <setting id="update_version_doku_streams" type="text" label="Doku-Streams Plugin Version" default="-" enable="false" />
<setting id="update_active_repo_url" type="text" label="Aktive Repo URL" default="-" enable="false" />
<setting id="update_info" type="text" label="Updates laufen ueber den normalen Kodi-Update-Mechanismus." default="" enable="false" />
<setting id="update_repo_url_main" type="text" label="Main URL (addons.xml)" default="https://gitea.it-drui.de/viewit/ViewIT-Kodi-Repo/raw/branch/main/addons.xml" />
<setting id="update_repo_url_nightly" type="text" label="Nightly URL (addons.xml)" default="https://gitea.it-drui.de/viewit/ViewIT-Kodi-Repo/raw/branch/nightly/addons.xml" />
<setting id="update_repo_url" type="text" label="Custom URL (addons.xml)" default="https://gitea.it-drui.de/viewit/ViewIT-Kodi-Repo/raw/branch/main/addons.xml" />
<setting id="auto_update_last_ts" type="text" label="Auto-Update letzte Pruefung (intern)" default="0" visible="false" />
<setting id="update_version_addon" type="text" label="ViewIT Version" default="-" visible="false" />
<setting id="update_version_serienstream" type="text" label="SerienStream Version" default="-" visible="false" />
<setting id="update_version_aniworld" type="text" label="AniWorld Version" default="-" visible="false" />
<setting id="update_version_einschalten" type="text" label="Einschalten Version" default="-" visible="false" />
<setting id="update_version_topstreamfilm" type="text" label="TopStream Version" default="-" visible="false" />
<setting id="update_version_filmpalast" type="text" label="Filmpalast Version" default="-" visible="false" />
<setting id="update_version_doku_streams" type="text" label="Doku-Streams Version" default="-" visible="false" />
</category>
<category label="Debug Global">
<setting id="debug_log_urls" type="bool" label="URLs mitschreiben (global)" default="false" />
<setting id="debug_dump_html" type="bool" label="HTML speichern (global)" default="false" />
<setting id="debug_show_url_info" type="bool" label="Aktuelle URL anzeigen (global)" default="false" />
<setting id="debug_log_errors" type="bool" label="Fehler mitschreiben (global)" default="false" />
<setting id="log_max_mb" type="number" label="URL-Log: maximale Dateigroesse (MB)" default="5" />
<setting id="log_max_files" type="number" label="URL-Log: Anzahl alter Dateien" default="3" />
<setting id="dump_max_files" type="number" label="HTML: maximale Dateien pro Plugin" default="200" />
</category>
<category label="Debug Quellen">
<setting id="log_urls_serienstream" type="bool" label="SerienStream: URLs mitschreiben" default="false" />
<setting id="dump_html_serienstream" type="bool" label="SerienStream: HTML speichern" default="false" />
<setting id="show_url_info_serienstream" type="bool" label="SerienStream: Aktuelle URL anzeigen" default="false" />
<setting id="log_errors_serienstream" type="bool" label="SerienStream: Fehler mitschreiben" default="false" />
<setting id="log_urls_aniworld" type="bool" label="AniWorld: URLs mitschreiben" default="false" />
<setting id="dump_html_aniworld" type="bool" label="AniWorld: HTML speichern" default="false" />
<setting id="show_url_info_aniworld" type="bool" label="AniWorld: Aktuelle URL anzeigen" default="false" />
<setting id="log_errors_aniworld" type="bool" label="AniWorld: Fehler mitschreiben" default="false" />
<setting id="log_urls_topstreamfilm" type="bool" label="TopStream: URLs mitschreiben" default="false" />
<setting id="dump_html_topstreamfilm" type="bool" label="TopStream: HTML speichern" default="false" />
<setting id="show_url_info_topstreamfilm" type="bool" label="TopStream: Aktuelle URL anzeigen" default="false" />
<setting id="log_errors_topstreamfilm" type="bool" label="TopStream: Fehler mitschreiben" default="false" />
<setting id="log_urls_einschalten" type="bool" label="Einschalten: URLs mitschreiben" default="false" />
<setting id="dump_html_einschalten" type="bool" label="Einschalten: HTML speichern" default="false" />
<setting id="show_url_info_einschalten" type="bool" label="Einschalten: Aktuelle URL anzeigen" default="false" />
<setting id="log_errors_einschalten" type="bool" label="Einschalten: Fehler mitschreiben" default="false" />
<setting id="log_urls_filmpalast" type="bool" label="Filmpalast: URLs mitschreiben" default="false" />
<setting id="dump_html_filmpalast" type="bool" label="Filmpalast: HTML speichern" default="false" />
<setting id="show_url_info_filmpalast" type="bool" label="Filmpalast: Aktuelle URL anzeigen" default="false" />
<setting id="log_errors_filmpalast" type="bool" label="Filmpalast: Fehler mitschreiben" default="false" />
</category> </category>
</settings> </settings>

View File

@@ -14,7 +14,6 @@ except ImportError: # pragma: no cover
TMDB_API_BASE = "https://api.themoviedb.org/3" TMDB_API_BASE = "https://api.themoviedb.org/3"
TMDB_IMAGE_BASE = "https://image.tmdb.org/t/p" TMDB_IMAGE_BASE = "https://image.tmdb.org/t/p"
MAX_CAST_MEMBERS = 30
_TMDB_THREAD_LOCAL = threading.local() _TMDB_THREAD_LOCAL = threading.local()
@@ -74,17 +73,53 @@ def _fetch_credits(
return [] return []
params = {"api_key": api_key, "language": (language or "de-DE").strip()} params = {"api_key": api_key, "language": (language or "de-DE").strip()}
url = f"{TMDB_API_BASE}/{kind}/{tmdb_id}/credits?{urlencode(params)}" url = f"{TMDB_API_BASE}/{kind}/{tmdb_id}/credits?{urlencode(params)}"
status, payload, body_text = _tmdb_get_json(url=url, timeout=timeout, log=log, log_responses=log_responses) if callable(log):
log(f"TMDB GET {url}")
try:
response = requests.get(url, timeout=timeout)
except Exception as exc: # pragma: no cover
if callable(log):
log(f"TMDB ERROR /{kind}/{{id}}/credits request_failed error={exc!r}")
return []
status = getattr(response, "status_code", None)
if callable(log): if callable(log):
log(f"TMDB RESPONSE /{kind}/{{id}}/credits status={status}") log(f"TMDB RESPONSE /{kind}/{{id}}/credits status={status}")
if log_responses and payload is None and body_text: if status != 200:
log(f"TMDB RESPONSE_BODY /{kind}/{{id}}/credits body={body_text[:2000]}")
if status != 200 or not isinstance(payload, dict):
return [] return []
try:
payload = response.json() or {}
except Exception:
return []
if callable(log) and log_responses:
try:
dumped = json.dumps(payload, ensure_ascii=False)
except Exception:
dumped = str(payload)
log(f"TMDB RESPONSE_BODY /{kind}/{{id}}/credits body={dumped[:2000]}")
cast_payload = payload.get("cast") or [] cast_payload = payload.get("cast") or []
if callable(log): if callable(log):
log(f"TMDB CREDITS /{kind}/{{id}}/credits cast={len(cast_payload)}") log(f"TMDB CREDITS /{kind}/{{id}}/credits cast={len(cast_payload)}")
return _parse_cast_payload(cast_payload) with_images: List[TmdbCastMember] = []
without_images: List[TmdbCastMember] = []
for entry in cast_payload:
name = (entry.get("name") or "").strip()
role = (entry.get("character") or "").strip()
thumb = _image_url(entry.get("profile_path") or "", size="w185")
if not name:
continue
member = TmdbCastMember(name=name, role=role, thumb=thumb)
if thumb:
with_images.append(member)
else:
without_images.append(member)
# Viele Kodi-Skins zeigen bei fehlendem Thumbnail Platzhalter-Köpfe.
# Bevorzugt daher Cast-Einträge mit Bild; nur wenn gar keine Bilder existieren,
# geben wir Namen ohne Bild zurück.
if with_images:
return with_images[:30]
return without_images[:30]
def _parse_cast_payload(cast_payload: object) -> List[TmdbCastMember]: def _parse_cast_payload(cast_payload: object) -> List[TmdbCastMember]:
@@ -106,8 +141,8 @@ def _parse_cast_payload(cast_payload: object) -> List[TmdbCastMember]:
else: else:
without_images.append(member) without_images.append(member)
if with_images: if with_images:
return with_images[:MAX_CAST_MEMBERS] return with_images[:30]
return without_images[:MAX_CAST_MEMBERS] return without_images[:30]
def _tmdb_get_json( def _tmdb_get_json(
@@ -128,29 +163,23 @@ def _tmdb_get_json(
if callable(log): if callable(log):
log(f"TMDB GET {url}") log(f"TMDB GET {url}")
sess = session or _get_tmdb_session() or requests.Session() sess = session or _get_tmdb_session() or requests.Session()
response = None
try: try:
response = sess.get(url, timeout=timeout) response = sess.get(url, timeout=timeout)
status = getattr(response, "status_code", None)
payload: object | None = None
body_text = ""
try:
payload = response.json()
except Exception:
try:
body_text = (response.text or "").strip()
except Exception:
body_text = ""
except Exception as exc: # pragma: no cover except Exception as exc: # pragma: no cover
if callable(log): if callable(log):
log(f"TMDB ERROR request_failed url={url} error={exc!r}") log(f"TMDB ERROR request_failed url={url} error={exc!r}")
return None, None, "" return None, None, ""
finally:
if response is not None: status = getattr(response, "status_code", None)
try: payload: object | None = None
response.close() body_text = ""
except Exception: try:
pass payload = response.json()
except Exception:
try:
body_text = (response.text or "").strip()
except Exception:
body_text = ""
if callable(log): if callable(log):
log(f"TMDB RESPONSE status={status} url={url}") log(f"TMDB RESPONSE status={status} url={url}")
@@ -185,17 +214,49 @@ def fetch_tv_episode_credits(
return [] return []
params = {"api_key": api_key, "language": (language or "de-DE").strip()} params = {"api_key": api_key, "language": (language or "de-DE").strip()}
url = f"{TMDB_API_BASE}/tv/{tmdb_id}/season/{season_number}/episode/{episode_number}/credits?{urlencode(params)}" url = f"{TMDB_API_BASE}/tv/{tmdb_id}/season/{season_number}/episode/{episode_number}/credits?{urlencode(params)}"
status, payload, body_text = _tmdb_get_json(url=url, timeout=timeout, log=log, log_responses=log_responses) if callable(log):
log(f"TMDB GET {url}")
try:
response = requests.get(url, timeout=timeout)
except Exception as exc: # pragma: no cover
if callable(log):
log(f"TMDB ERROR /tv/{{id}}/season/{{n}}/episode/{{e}}/credits request_failed error={exc!r}")
return []
status = getattr(response, "status_code", None)
if callable(log): if callable(log):
log(f"TMDB RESPONSE /tv/{{id}}/season/{{n}}/episode/{{e}}/credits status={status}") log(f"TMDB RESPONSE /tv/{{id}}/season/{{n}}/episode/{{e}}/credits status={status}")
if log_responses and payload is None and body_text: if status != 200:
log(f"TMDB RESPONSE_BODY /tv/{{id}}/season/{{n}}/episode/{{e}}/credits body={body_text[:2000]}")
if status != 200 or not isinstance(payload, dict):
return [] return []
try:
payload = response.json() or {}
except Exception:
return []
if callable(log) and log_responses:
try:
dumped = json.dumps(payload, ensure_ascii=False)
except Exception:
dumped = str(payload)
log(f"TMDB RESPONSE_BODY /tv/{{id}}/season/{{n}}/episode/{{e}}/credits body={dumped[:2000]}")
cast_payload = payload.get("cast") or [] cast_payload = payload.get("cast") or []
if callable(log): if callable(log):
log(f"TMDB CREDITS /tv/{{id}}/season/{{n}}/episode/{{e}}/credits cast={len(cast_payload)}") log(f"TMDB CREDITS /tv/{{id}}/season/{{n}}/episode/{{e}}/credits cast={len(cast_payload)}")
return _parse_cast_payload(cast_payload) with_images: List[TmdbCastMember] = []
without_images: List[TmdbCastMember] = []
for entry in cast_payload:
name = (entry.get("name") or "").strip()
role = (entry.get("character") or "").strip()
thumb = _image_url(entry.get("profile_path") or "", size="w185")
if not name:
continue
member = TmdbCastMember(name=name, role=role, thumb=thumb)
if thumb:
with_images.append(member)
else:
without_images.append(member)
if with_images:
return with_images[:30]
return without_images[:30]
def lookup_tv_show( def lookup_tv_show(
@@ -485,13 +546,27 @@ def lookup_tv_season_summary(
params = {"api_key": api_key, "language": (language or "de-DE").strip()} params = {"api_key": api_key, "language": (language or "de-DE").strip()}
url = f"{TMDB_API_BASE}/tv/{tmdb_id}/season/{season_number}?{urlencode(params)}" url = f"{TMDB_API_BASE}/tv/{tmdb_id}/season/{season_number}?{urlencode(params)}"
status, payload, body_text = _tmdb_get_json(url=url, timeout=timeout, log=log, log_responses=log_responses) if callable(log):
log(f"TMDB GET {url}")
try:
response = requests.get(url, timeout=timeout)
except Exception:
return None
status = getattr(response, "status_code", None)
if callable(log): if callable(log):
log(f"TMDB RESPONSE /tv/{{id}}/season/{{n}} status={status}") log(f"TMDB RESPONSE /tv/{{id}}/season/{{n}} status={status}")
if log_responses and payload is None and body_text: if status != 200:
log(f"TMDB RESPONSE_BODY /tv/{{id}}/season/{{n}} body={body_text[:2000]}")
if status != 200 or not isinstance(payload, dict):
return None return None
try:
payload = response.json() or {}
except Exception:
return None
if callable(log) and log_responses:
try:
dumped = json.dumps(payload, ensure_ascii=False)
except Exception:
dumped = str(payload)
log(f"TMDB RESPONSE_BODY /tv/{{id}}/season/{{n}} body={dumped[:2000]}")
plot = (payload.get("overview") or "").strip() plot = (payload.get("overview") or "").strip()
poster_path = (payload.get("poster_path") or "").strip() poster_path = (payload.get("poster_path") or "").strip()
@@ -519,9 +594,27 @@ def lookup_tv_season(
return None return None
params = {"api_key": api_key, "language": (language or "de-DE").strip()} params = {"api_key": api_key, "language": (language or "de-DE").strip()}
url = f"{TMDB_API_BASE}/tv/{tmdb_id}/season/{season_number}?{urlencode(params)}" url = f"{TMDB_API_BASE}/tv/{tmdb_id}/season/{season_number}?{urlencode(params)}"
status, payload, body_text = _tmdb_get_json(url=url, timeout=timeout, log=log, log_responses=log_responses) if callable(log):
episodes = (payload or {}).get("episodes") if isinstance(payload, dict) else [] log(f"TMDB GET {url}")
episodes = episodes or [] try:
response = requests.get(url, timeout=timeout)
except Exception as exc: # pragma: no cover
if callable(log):
log(f"TMDB ERROR /tv/{{id}}/season/{{n}} request_failed error={exc!r}")
return None
status = getattr(response, "status_code", None)
payload = None
body_text = ""
try:
payload = response.json() or {}
except Exception:
try:
body_text = (response.text or "").strip()
except Exception:
body_text = ""
episodes = (payload or {}).get("episodes") or []
if callable(log): if callable(log):
log(f"TMDB RESPONSE /tv/{{id}}/season/{{n}} status={status} episodes={len(episodes)}") log(f"TMDB RESPONSE /tv/{{id}}/season/{{n}} status={status} episodes={len(episodes)}")
if log_responses: if log_responses:

View File

@@ -1,49 +1,55 @@
# ViewIT Hauptlogik (`addon/default.py`) # ViewIT Hauptlogik (`addon/default.py`)
Diese Datei ist der Router des Addons. Dieses Dokument beschreibt den Einstiegspunkt des Addons und die zentrale Steuerlogik.
Sie verbindet Kodi UI, Plugin Calls und Playback.
## Kernaufgabe ## Aufgabe der Datei
- Plugins laden `addon/default.py` ist der Router des Addons. Er:
- Menues bauen - lädt die PluginModule dynamisch,
- Aktionen auf Plugin Methoden mappen - stellt die KodiNavigation bereit,
- Playback starten - übersetzt UIAktionen in PluginAufrufe,
- Playstate speichern - startet die Wiedergabe und verwaltet Playstate/Resume.
## Ablauf ## Ablauf (high level)
1. Plugin Discovery fuer `addon/plugins/*.py` ohne `_` Prefix. 1. **PluginDiscovery**: Lädt alle `addon/plugins/*.py` (ohne `_`Prefix). Bevorzugt `Plugin = <Klasse>`, sonst werden `BasisPlugin`Subklassen deterministisch instanziiert.
2. Navigation fuer Titel, Staffeln und Episoden. 2. **Navigation**: Baut KodiListen (Serien/Staffeln/Episoden) auf Basis der PluginAntworten.
3. Playback: Link holen, optional aufloesen, abspielen. 3. **Playback**: Holt StreamLinks aus dem Plugin und startet die Wiedergabe.
4. Playstate: watched und resume in `playstate.json` schreiben. 4. **Playstate**: Speichert ResumeDaten lokal (`playstate.json`) und setzt `playcount`/ResumeInfos.
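Zu Schritt 1 (Plugin Discovery) eine minimale Skizze, wie ein solcher Loader aussehen koennte; Funktionsname und Rueckgabetyp sind hier frei gewaehlt und nicht der echte Code aus `default.py`:

```python
# Minimal-Skizze: Module aus addon/plugins/*.py ohne "_"-Prefix laden.
# Bevorzugt wird ein explizites "Plugin"-Attribut als Einstiegspunkt.
import importlib.util
from pathlib import Path
from typing import List

def discover_plugins(plugins_dir: str) -> List[object]:
    instances: List[object] = []
    for path in sorted(Path(plugins_dir).glob("*.py")):
        if path.name.startswith("_"):
            continue
        spec = importlib.util.spec_from_file_location(path.stem, path)
        if spec is None or spec.loader is None:
            continue
        module = importlib.util.module_from_spec(spec)
        try:
            spec.loader.exec_module(module)
        except Exception:
            continue  # Importfehler pro Plugin isolieren, Addon laeuft weiter
        plugin_cls = getattr(module, "Plugin", None)
        if plugin_cls is not None:
            instances.append(plugin_cls())
    return instances
```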
## Routing ## Routing & Aktionen
Der Router liest Query Parameter aus `sys.argv[2]`. Die Datei arbeitet mit URLParametern (KodiPluginStandard). Typische Aktionen:
Typische Aktionen: - `search` → Suche über ein Plugin
- `search` - `seasons` → Staffeln für einen Titel
- `seasons` - `episodes` → Episoden für eine Staffel
- `episodes` - `play` → StreamLink auflösen und abspielen
- `play_episode`
- `play_movie`
- `play_episode_url`
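Zur Veranschaulichung des Routings eine kleine Skizze, wie Query-Parameter aus `sys.argv[2]` gelesen und auf die oben genannten Aktionen verteilt werden koennten; die `print`-Aufrufe stehen nur stellvertretend fuer die echten Handler:

```python
import sys
from urllib.parse import parse_qsl

def run(argv: list) -> None:
    # Kodi uebergibt die Query (z. B. "?action=seasons&plugin=serienstream&title=Foo")
    # als drittes Argument an das Addon.
    params = dict(parse_qsl(argv[2].lstrip("?"))) if len(argv) > 2 else {}
    action = params.get("action", "")
    if action == "search":
        print("Suche:", params.get("plugin", ""), params.get("query", ""))
    elif action == "seasons":
        print("Staffeln fuer:", params.get("title", ""))
    elif action == "episodes":
        print("Episoden fuer:", params.get("title", ""), "Staffel", params.get("season", ""))
    elif action in ("play_episode", "play_movie", "play_episode_url"):
        print("Playback starten:", action)
    else:
        print("Hauptmenue anzeigen")

if __name__ == "__main__":
    run(sys.argv)
```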
## Playstate Die genaue Aktion wird aus den QueryParametern gelesen und an das entsprechende Plugin delegiert.
- Speicherort: Addon Profilordner, Datei `playstate.json`
- Key: Plugin + Titel + Staffel + Episode
- Werte: watched, playcount, resume_position, resume_total
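Wie ein Eintrag in `playstate.json` aussehen koennte, als Skizze; das konkrete Key-Format und die Feldnamen sind hier nur angenommen:

```python
import json
import os

def playstate_key(plugin: str, title: str, season: str, episode: str) -> str:
    # Angenommenes Schema: Plugin + Titel + Staffel + Episode als ein Key.
    return "|".join([plugin, title, season, episode])

def save_resume(path: str, key: str, position: float, total: float, watched: bool) -> None:
    state = {}
    if os.path.exists(path):
        with open(path, "r", encoding="utf-8") as handle:
            state = json.load(handle)
    state[key] = {
        "watched": watched,
        "playcount": 1 if watched else 0,
        "resume_position": position,
        "resume_total": total,
    }
    with open(path, "w", encoding="utf-8") as handle:
        json.dump(state, handle, ensure_ascii=False, indent=2)

save_resume("playstate.json", playstate_key("serienstream", "Beispielserie", "1", "3"), 421.0, 2700.0, False)
```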
## Wichtige Helper ## Playstate (Resume/Watched)
- Plugin Loader und Discovery - **Speicherort**: `playstate.json` im AddonProfilordner.
- UI Builder fuer ListItems - **Key**: Kombination aus PluginName, Titel, Staffel, Episode.
- Playstate Load/Save/Merge - **Verwendung**:
- TMDB Merge mit Source Fallback - `playcount` wird gesetzt, wenn „gesehen“ markiert ist.
- `resume_position`/`resume_total` werden gesetzt, wenn vorhanden.
## Fehlerverhalten ## Wichtige Hilfsfunktionen
- Importfehler pro Plugin werden isoliert behandelt. - **PluginLoader**: findet & instanziiert Plugins.
- Fehler in einem Plugin sollen das Addon nicht stoppen. - **UIHelper**: setzt ContentType, baut Verzeichnisseinträge.
- User bekommt kurze Fehlermeldungen in Kodi. - **PlaystateHelper**: `_load_playstate`, `_save_playstate`, `_apply_playstate_to_info`.
- **MetadataMerge**: PluginMetadaten können TMDB übersteuern, TMDB dient als Fallback.
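Zum Punkt `TMDB Merge mit Source Fallback` eine kleine Skizze der Grundidee (Feldnamen beispielhaft): Werte aus der Quelle haben Vorrang, TMDB fuellt nur fehlende Felder auf.

```python
def merge_metadata(source_info: dict, tmdb_info: dict) -> dict:
    # Quelle uebersteuert TMDB; TMDB dient nur als Fallback fuer fehlende Felder.
    merged = dict(tmdb_info)
    for key, value in source_info.items():
        if value:
            merged[key] = value
    return merged

print(merge_metadata({"title": "Beispiel", "plot": ""}, {"plot": "TMDB-Plot", "rating": "7.5"}))
# -> {'plot': 'TMDB-Plot', 'rating': '7.5', 'title': 'Beispiel'}
```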
## Erweiterung ## Fehlerbehandlung
Fuer neue Aktion im Router: - PluginImportfehler werden isoliert behandelt, damit das Addon nicht komplett ausfällt.
1. Action im `run()` Handler registrieren. - NetzwerkFehler werden in Plugins abgefangen, `default.py` sollte nur saubere Fehlermeldungen weitergeben.
2. ListItem mit passenden Parametern bauen.
3. Zielmethode im Plugin bereitstellen. ## Debugging
- Globale DebugSettings werden über `addon/resources/settings.xml` gesteuert.
- Plugins loggen URLs/HTML optional (siehe jeweilige PluginDoku).
## Änderungen & Erweiterungen
Für neue Aktionen:
1. Neue Aktion im Router registrieren.
2. UIEinträge passend anlegen.
3. Entsprechende PluginMethode definieren oder erweitern.
## Hinweis zur Erstellung
Teile dieser Dokumentation wurden KIgestützt erstellt und bei Bedarf manuell angepasst.
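Zu den drei Erweiterungsschritten oben eine Skizze, wie ein neuer Verzeichniseintrag fuer eine hypothetische Aktion `related` gebaut werden koennte; Aktionsname und Plugin-Methode sind frei erfunden, `xbmcgui`/`xbmcplugin` entsprechen der ueblichen Kodi-Python-API:

```python
# Skizze: Menuepunkt, der beim Anklicken die (erfundene) Aktion "related" aufruft.
import sys
from urllib.parse import urlencode

import xbmcgui
import xbmcplugin

def add_related_entry(handle: int, base_url: str, plugin_name: str, title: str) -> None:
    query = urlencode({"action": "related", "plugin": plugin_name, "title": title})
    item = xbmcgui.ListItem(label=f"Aehnliche Titel zu {title}")
    xbmcplugin.addDirectoryItem(handle, f"{base_url}?{query}", item, isFolder=True)

# Aufruf im Router, z. B.:
# add_related_entry(int(sys.argv[1]), sys.argv[0], "serienstream", "Beispielserie")
```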

View File

@@ -1,85 +1,118 @@
# ViewIT Plugin Entwicklung (`addon/plugins/*_plugin.py`) # ViewIT Entwicklerdoku Plugins (`addon/plugins/*_plugin.py`)
Diese Datei zeigt, wie Plugins im Projekt aufgebaut sind und wie sie mit dem Router zusammenarbeiten. Diese Doku beschreibt, wie Plugins im ViewITAddon aufgebaut sind und wie neue ProviderIntegrationen entwickelt werden.
## Basics

- Each plugin is a single file under `addon/plugins/`.
- Files **without** a `_` prefix are loaded automatically.
- Each file contains a class that inherits from `BasisPlugin`.
- Optional: `Plugin = <class>` as an explicit entry point (preferred by the loader).
## Required methods (BasisPlugin)

Every plugin must implement these methods (a minimal skeleton follows below):

- `async search_titles(query: str) -> list[str]`
- `seasons_for(title: str) -> list[str]`
- `episodes_for(title: str, season: str) -> list[str]`
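A minimal sketch of a new provider plugin. The import path and return shapes follow this documentation; the provider name and the hard-coded data are purely illustrative, and `_template_plugin.py` remains the authoritative starting point:

```python
# Minimal plugin sketch (assumption: import path as documented, data hard-coded for illustration).
from plugin_interface import BasisPlugin  # defined in addon/plugin_interface.py


class ExampleProviderPlugin(BasisPlugin):
    name = "Exampleprovider"   # display name, starts with an uppercase letter
    is_available = True        # the loader skips plugins with is_available = False

    async def search_titles(self, query: str) -> list[str]:
        # Real plugins query the provider here; only title matches are returned
        # (the word-based match policy is sketched further below).
        catalog = ["Example Series", "Another Example"]
        return [title for title in catalog if query.lower() in title.lower()]

    def seasons_for(self, title: str) -> list[str]:
        return ["Staffel 1", "Staffel 2"]

    def episodes_for(self, title: str, season: str) -> list[str]:
        return ["Episode 1", "Episode 2"]


# Explicit entry point, preferred by the loader.
Plugin = ExampleProviderPlugin
```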
## Contract between plugin and main logic (`default.py`)

The main logic calls plugin methods and processes nothing but their return values.

Key return values for the main logic:

- `search_titles(...)` → list of title strings for the result list
- `seasons_for(...)` → list of season labels
- `episodes_for(...)` → list of episode labels
- `stream_link_for(...)` → hoster/player link (not necessarily the final media URL)
- `resolve_stream_link(...)` → final, playable URL after redirects/resolver
- `metadata_for(...)` → info labels/art (plot/poster) from the source
- Optional `available_hosters_for(...)` → selectable hoster names for the dialog
- Optional `series_url_for_title(...)` → stable detail URL per title for follow-up calls
- Optional `remember_series_url(...)` → accept a detail URL that is already known
## Standard for film providers (no real seasons)

When a provider has no real seasons:

- `seasons_for(title)` returns `["Film"]`
- `episodes_for(title, "Film")` returns `["Stream"]`
## Optional features (capabilities)

Via `capabilities()` a plugin can offer additional functions:

- `popular_series` → `popular_series()`
- `genres` → `genres()` + `titles_for_genre(genre)`
- `latest_episodes` → `latest_episodes(page=1)`
- `new_titles` → `new_titles_page(page=1)`
- `alpha` → `alpha_index()` + `titles_for_alpha_page(letter, page)`
- `series_catalog` → `series_catalog_page(page=1)`
## Recommended structure

- Constants for URLs/endpoints (BASE_URL, paths, templates)
- `requests` + `bs4` are optional; if both are missing, the plugin should deactivate cleanly (see the import guard below)
- Helper functions for parsing and normalization
- Caches for search, season, and episode data
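A minimal sketch of such an import guard, assuming the loader only checks the `is_available` flag as described in the discovery docs; the class and name are illustrative:

```python
# Import guard sketch (assumption: the loader only checks is_available).
try:
    import requests                  # optional HTTP client
    from bs4 import BeautifulSoup    # optional HTML parser
    _DEPS_OK = True
except ImportError:
    requests = None
    BeautifulSoup = None
    _DEPS_OK = False


class GuardedPlugin:
    name = "Guardedprovider"   # illustrative name
    is_available = _DEPS_OK    # loader skips the plugin when dependencies are missing
```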
## Search (current policy)

- **Title matches only**
- **Word-based matching** after normalization (lowercase, non-alphanumeric characters → spaces)
- No sub-word hits inside a word (example: `hund` does not match `thunder`)
- Descriptions/plot/meta are never used for matching

A normalization sketch follows below.
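A minimal sketch of the word-based matching rule described above; the helper names are illustrative:

```python
# Word-based title match sketch (assumption: helper names are illustrative).
def _normalize(text: str) -> list[str]:
    # Lowercase, replace every non-alphanumeric character with a space, split into words.
    cleaned = "".join(ch if ch.isalnum() else " " for ch in text.lower())
    return cleaned.split()

def title_matches(query: str, title: str) -> bool:
    query_words = _normalize(query)
    title_words = _normalize(title)
    # Every query word must appear as a whole word in the title (no sub-word hits).
    return all(word in title_words for word in query_words)

print(title_matches("hund", "Thunder Road"))                # False: no sub-word match
print(title_matches("breaking bad", "Breaking Bad (US)"))   # True
```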
## Naming

- Plugin class name: `XxxPlugin`
- Display name (property `name`): **starts with an uppercase letter** (e.g. `Serienstream`, `Einschalten`)
## Settings per plugin

Standard: `*_base_url` (domain / BASE_URL); a settings-read sketch follows below.

- Examples:
  - `serienstream_base_url`
  - `aniworld_base_url`
  - `einschalten_base_url`
  - `topstream_base_url`
  - `filmpalast_base_url`
  - `doku_streams_base_url`
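Inside Kodi the base URL can be read from the addon settings. A minimal sketch with a fallback for running outside Kodi; the fallback URL is a placeholder, not the real default:

```python
# Settings read sketch (assumption: the fallback URL is a placeholder).
try:
    import xbmcaddon
    _ADDON = xbmcaddon.Addon()
except ImportError:        # running outside Kodi, e.g. in tests
    _ADDON = None

def base_url(setting_id: str = "serienstream_base_url",
             fallback: str = "https://example.org") -> str:
    if _ADDON is not None:
        value = _ADDON.getSetting(setting_id)
        if value:
            return value.rstrip("/")
    return fallback
```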
## Playback

- Implement `stream_link_for(...)` (returns the preferred hoster link).
- Provide `available_hosters_for(...)` when the page offers several hosters.
- Implement `resolve_stream_link(...)` along the unified flow (sketched below):
  1. Resolve redirects (if any)
  2. Try ResolveURL (`resolveurl_backend.resolve`)
  3. On failure, fall back to the best available link
- Optionally support `set_preferred_hosters(...)` so the hoster selection from the main logic takes effect directly.
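A minimal sketch of that flow. It assumes `resolveurl_backend.resolve(url)` returns a playable URL or a falsy value and that the module can be imported directly; the redirect handling via `requests` is an assumption, not the addon's exact code:

```python
# resolve_stream_link sketch (assumption: simplified signatures, illustrative error handling).
import requests

try:
    import resolveurl_backend   # project helper wrapping ResolveURL (import path assumed)
except ImportError:
    resolveurl_backend = None


def resolve_stream_link(hoster_url: str) -> str:
    # 1. Resolve redirects so the resolver sees the final hoster page.
    try:
        response = requests.head(hoster_url, allow_redirects=True, timeout=10)
        hoster_url = response.url
    except requests.RequestException:
        pass  # keep the original URL on network errors

    # 2. Try ResolveURL for a directly playable URL.
    if resolveurl_backend is not None:
        resolved = resolveurl_backend.resolve(hoster_url)
        if resolved:
            return resolved

    # 3. Fall back to the best link we have.
    return hoster_url
```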
## Standard flow (recommended)

1. **Search**: return titles only and map title → detail URL.
2. **Navigation**: support `series_url_for_title`/`remember_series_url` so URLs stay stable between calls.
3. **Hoster selection**: extract hoster names from the detail page and offer them.
4. **Playback**: return the hoster link, then finalize consistently via `resolve_stream_link`.
5. **Metadata**: use `metadata_for` and return plot/poster from the source.
6. **Fallbacks**: parse defensively for layout differences and keep logging switchable.
## Debugging

Controlled globally via settings:

- `debug_log_urls`
- `debug_dump_html`
- `debug_show_url_info`

Plugins should use the helpers from `addon/plugin_helpers.py`:

- `log_url(...)`
- `dump_response_html(...)`
- `notify_url(...)`
## Template

`addon/plugins/_template_plugin.py` serves as the starting point for new providers.
## Build & test

- Build the ZIP: `./scripts/build_kodi_zip.sh`
- Build the addon folder: `./scripts/build_install_addon.sh`
- Update the plugin manifest: `python3 scripts/generate_plugin_manifest.py`
- Live snapshot checks: `python3 qa/run_plugin_snapshots.py` (refresh with `--update`)
## Example checklist

- [ ] `name` set correctly
- [ ] `*_base_url` present in settings
- [ ] Search matches titles only and word-based
- [ ] `stream_link_for` + `resolve_stream_link` follow the standard flow
- [ ] Optional: `available_hosters_for` + `set_preferred_hosters` present
- [ ] Optional: `series_url_for_title` + `remember_series_url` present
- [ ] Error handling and timeouts in place
- [ ] Optional: caches only where they save time
## Note on authorship

Parts of this documentation were AI-assisted and manually adjusted where needed.

View File

@@ -1,71 +1,115 @@
# ViewIT Plugin System

This document describes how the ViewIT plugin system works and how the community can add new integrations.
## Overview

ViewIT loads provider integrations dynamically from `addon/plugins/*.py`. Each file contains a class that inherits from `BasisPlugin`. On startup all plugins are instantiated and only used when they are available.
Further details:

- `docs/DEFAULT_ROUTER.md` (main logic in `addon/default.py`)
- `docs/PLUGIN_DEVELOPMENT.md` (developer docs for plugins)
- `docs/PLUGIN_MANIFEST.json` (central overview of plugins, versions, capabilities)
## Current plugins

- `serienstream_plugin.py` – Serienstream (s.to)
- `topstreamfilm_plugin.py` – Topstreamfilm
- `einschalten_plugin.py` – Einschalten
- `aniworld_plugin.py` – Aniworld
- `filmpalast_plugin.py` – Filmpalast
- `dokustreams_plugin.py` – Doku-Streams
- `_template_plugin.py` – template for new plugins
## Plugin discovery (loading process)

The loader in `addon/default.py`:

1. Finds all `*.py` files in `addon/plugins/`
2. Skips files that start with `_`
3. Loads the modules dynamically
4. Uses `Plugin = <class>` as the preferred entry point (if present)
5. Fallback: instantiates classes that inherit from `BasisPlugin` (sorted deterministically)
6. Ignores plugins with `is_available = False`

This keeps broken plugins isolated so they do not block the whole add-on. A loader sketch follows below.
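A simplified, stdlib-only sketch of that discovery process; the real loader lives in `addon/default.py` and may differ in detail, and the `plugin_interface` import path is assumed:

```python
# Plugin discovery sketch (assumption: simplified compared to the real loader in default.py).
import importlib.util
import inspect
from pathlib import Path

from plugin_interface import BasisPlugin  # addon/plugin_interface.py


def discover_plugins(plugins_dir: str = "addon/plugins") -> list:
    plugins = []
    for path in sorted(Path(plugins_dir).glob("*.py")):
        if path.name.startswith("_"):
            continue  # skip templates/private modules
        spec = importlib.util.spec_from_file_location(path.stem, path)
        module = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(module)

        # Preferred entry point: explicit Plugin alias.
        cls = getattr(module, "Plugin", None)
        if cls is None:
            # Fallback: first BasisPlugin subclass, in deterministic (name) order.
            candidates = sorted(
                (c for _, c in inspect.getmembers(module, inspect.isclass)
                 if issubclass(c, BasisPlugin) and c is not BasisPlugin),
                key=lambda c: c.__name__,
            )
            cls = candidates[0] if candidates else None
        if cls is None:
            continue

        instance = cls()
        if getattr(instance, "is_available", True):
            plugins.append(instance)
    return plugins
```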
## Plugin manifest (audit & reproducibility)

`docs/PLUGIN_MANIFEST.json` lists all plugins with version, capabilities, and base settings.
It is generated with: `python3 scripts/generate_plugin_manifest.py`
## BasisPlugin – required methods

Defined in `addon/plugin_interface.py`:

- `async search_titles(query: str) -> list[str]`
- `seasons_for(title: str) -> list[str]`
- `episodes_for(title: str, season: str) -> list[str]`
- optional `metadata_for(title: str) -> (info_labels, art, cast)`
## Optional features (capabilities)

Plugins can offer additional features:

- `capabilities() -> set[str]`
  - `popular_series`: popular series available
  - `genres`: genre list available
  - `latest_episodes`: latest episodes available
  - `new_titles`: new titles available
  - `alpha`: A-Z index available
  - `series_catalog`: series catalog available
- `popular_series() -> list[str]`
- `genres() -> list[str]`
- `titles_for_genre(genre: str) -> list[str]`
- `latest_episodes(page: int = 1) -> list[LatestEpisode]` (if offered)
- `new_titles_page(page: int = 1) -> list[str]` (if offered)
- `alpha_index() -> list[str]` (if offered)
- `series_catalog_page(page: int = 1) -> list[str]` (if offered)

Metadata:

- `prefer_source_metadata = True` means plugin metadata takes precedence over TMDB; TMDB is only the fallback.

ViewIT only shows the UI features that a plugin actually provides. A capabilities sketch follows below.
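A minimal sketch of a plugin advertising two capabilities; the class name and the returned data are illustrative only:

```python
# Capabilities sketch (assumption: return values are hard-coded for illustration).
class GenreCapablePlugin:
    name = "Genreprovider"          # illustrative name
    prefer_source_metadata = True   # source metadata first, TMDB only as fallback

    def capabilities(self) -> set[str]:
        # Only advertise what the provider can actually deliver;
        # the UI builds menus from exactly this set.
        return {"genres", "popular_series"}

    def genres(self) -> list[str]:
        return ["Action", "Drama", "Dokumentation"]

    def titles_for_genre(self, genre: str) -> list[str]:
        return [f"{genre} Title {i}" for i in range(1, 4)]

    def popular_series(self) -> list[str]:
        return ["Popular Series A", "Popular Series B"]
```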
## Plugin structure (recommended)

An integration should typically provide:

- A `BASE_URL` constant
- `search_titles()` with the provider search
- `seasons_for()` and `episodes_for()` with HTML parsing
- `stream_link_for()` optionally for direct playback links
- `metadata_for()` optionally for plot/poster from the source
- Optional: `available_hosters_for()` or provider-specific helpers

`addon/plugins/_template_plugin.py` serves as the starting point.
## Community contributions (workflow)

1. Create a fork/branch
2. Add a new file under `addon/plugins/` (e.g. `meinprovider_plugin.py`)
3. Create a class that implements `BasisPlugin`
4. Test in Kodi (build and install the ZIP)
5. Open a PR
## Quality guidelines

- No network access at import top level
- Network access only inside methods (e.g. `search_titles`)
- Catch errors cleanly and return understandable error messages
- No global state that surprises across instances
- Encapsulate provider-specific parsers in helper functions
- Reproducible ordering: use the `Plugin` alias or keep class names unambiguous
## Debugging & logs

Helpful logs are written to `userdata/addon_data/plugin.video.viewit/logs/`.
Providers should keep URL logging optional (settings).
## ZIP build

```
./scripts/build_kodi_zip.sh
```

The resulting ZIP is then placed at `dist/plugin.video.viewit-<version>.zip`.

View File

@@ -1,44 +0,0 @@
# Release Flow (Main + Nightly)
This project uses two release channels:
- `nightly`: integration and test channel
- `main`: stable channel
## Rules
- Feature work goes to `nightly` only.
- Promote from `nightly` to `main` with `--squash` only.
- `main` version has no suffix (`0.1.60`).
- `nightly` version uses `-nightly` and is always at least one patch higher than `main` (`0.1.61-nightly`).
- Keep changelogs split:
- `CHANGELOG-NIGHTLY.md`
- `CHANGELOG.md`
## Nightly publish
1) Finish changes on `nightly`.
2) Bump addon version in `addon/addon.xml` to `X.Y.Z-nightly`.
3) Build and publish nightly repo artifacts.
4) Push `nightly`.
## Promote nightly to main
```bash
git checkout main
git pull origin main
git merge --squash nightly
git commit -m "release: X.Y.Z"
```
Then:
1) Set `addon/addon.xml` version to `X.Y.Z` (without `-nightly`).
2) Build and publish main repo artifacts.
3) Push `main`.
4) Optional tag: `vX.Y.Z`.
## Local ZIPs (separated)
- Main ZIP output: `dist/local_zips/main/`
- Nightly ZIP output: `dist/local_zips/nightly/`

Binary file not shown.

View File

@@ -21,20 +21,8 @@ fi
mkdir -p "${REPO_DIR}" mkdir -p "${REPO_DIR}"
read -r ADDON_ID ADDON_VERSION < <(python3 - "${PLUGIN_ADDON_XML}" <<'PY'
import sys
import xml.etree.ElementTree as ET
root = ET.parse(sys.argv[1]).getroot()
print(root.attrib.get("id", "plugin.video.viewit"), root.attrib.get("version", "0.0.0"))
PY
)
PLUGIN_ZIP="$("${ROOT_DIR}/scripts/build_kodi_zip.sh")" PLUGIN_ZIP="$("${ROOT_DIR}/scripts/build_kodi_zip.sh")"
PLUGIN_ZIP_NAME="$(basename "${PLUGIN_ZIP}")" cp -f "${PLUGIN_ZIP}" "${REPO_DIR}/"
PLUGIN_ADDON_DIR_IN_REPO="${REPO_DIR}/${ADDON_ID}"
mkdir -p "${PLUGIN_ADDON_DIR_IN_REPO}"
cp -f "${PLUGIN_ZIP}" "${PLUGIN_ADDON_DIR_IN_REPO}/${PLUGIN_ZIP_NAME}"
read -r REPO_ADDON_ID REPO_ADDON_VERSION < <(python3 - "${REPO_ADDON_XML}" <<'PY'
import sys
@@ -86,9 +74,6 @@ REPO_ZIP_NAME="${REPO_ADDON_ID}-${REPO_ADDON_VERSION}.zip"
REPO_ZIP_PATH="${REPO_DIR}/${REPO_ZIP_NAME}"
rm -f "${REPO_ZIP_PATH}"
python3 "${ROOT_DIR}/scripts/zip_deterministic.py" "${REPO_ZIP_PATH}" "${TMP_REPO_ADDON_DIR}" >/dev/null
REPO_ADDON_DIR_IN_REPO="${REPO_DIR}/${REPO_ADDON_ID}"
mkdir -p "${REPO_ADDON_DIR_IN_REPO}"
cp -f "${REPO_ZIP_PATH}" "${REPO_ADDON_DIR_IN_REPO}/${REPO_ZIP_NAME}"
python3 - "${PLUGIN_ADDON_XML}" "${TMP_REPO_ADDON_DIR}/addon.xml" "${REPO_DIR}/addons.xml" <<'PY' python3 - "${PLUGIN_ADDON_XML}" "${TMP_REPO_ADDON_DIR}/addon.xml" "${REPO_DIR}/addons.xml" <<'PY'
import sys import sys
@@ -122,5 +107,4 @@ echo "Repo built:"
echo " ${REPO_DIR}/addons.xml" echo " ${REPO_DIR}/addons.xml"
echo " ${REPO_DIR}/addons.xml.md5" echo " ${REPO_DIR}/addons.xml.md5"
echo " ${REPO_ZIP_PATH}" echo " ${REPO_ZIP_PATH}"
echo " ${PLUGIN_ADDON_DIR_IN_REPO}/${PLUGIN_ZIP_NAME}" echo " ${REPO_DIR}/$(basename "${PLUGIN_ZIP}")"
echo " ${REPO_ADDON_DIR_IN_REPO}/${REPO_ZIP_NAME}"