Compare commits


31 Commits

SHA1 Message Date
d414fac022 Nightly: refactor readability, progress callbacks, and resource handling 2026-02-23 16:47:00 +01:00
7a330c9bc0 repo: publish kodi zips in addon-id subfolders 2026-02-19 20:11:59 +01:00
f8d180bcb5 chore: remove tracked __pycache__ and .pyc files 2026-02-19 14:57:41 +01:00
d71adcfac7 ui: make user-visible texts clearer and more human 2026-02-19 14:55:58 +01:00
81750ad148 docs: rewrite README and docs to concise ASCII style 2026-02-19 14:22:24 +01:00
4409f9432c nightly: playback fast-path, windows asyncio fix, v0.1.56 2026-02-19 14:10:09 +01:00
307df97d74 serienstream: source metadata for seasons/episodes 2026-02-08 23:13:24 +01:00
537f0e23e1 nightly: per-plugin metadata source option 2026-02-08 22:33:07 +01:00
ed1f59d3f2 Nightly: fix Einschalten base URL default 2026-02-07 17:40:31 +01:00
a37c45e2ef Nightly: bump version and refresh snapshots 2026-02-07 17:36:33 +01:00
7f5924b850 Nightly: snapshot harness and cache ignore 2026-02-07 17:33:45 +01:00
b370afe167 Nightly: reproducible zips and plugin manifest 2026-02-07 17:28:49 +01:00
09d2fc850d Nightly: deterministic plugin loading and docs refresh 2026-02-07 17:23:29 +01:00
6ce1bf71c1 Add Doku-Streams plugin and prefer source metadata 2026-02-07 16:11:48 +01:00
c7d848385f Add Filmpalast series catalog browsing 2026-02-06 12:35:22 +01:00
280a82f08b Add Filmpalast A-Z browsing and document Gitea release upload 2026-02-06 12:29:12 +01:00
9aedbee083 Add configurable update source and update-only version display 2026-02-05 13:15:58 +01:00
4c3f90233d Stop tracking local coverage and pycache artifacts 2026-02-04 15:34:48 +01:00
9e15212a66 Add local Kodi repo tooling and repository addon 2026-02-04 15:13:00 +01:00
951e99cb4c Add Filmpalast genre browsing and paged genre titles 2026-02-02 23:13:23 +01:00
4f7b0eba0c Fix Filmpalast resolver handoff for movie playback 2026-02-02 22:28:34 +01:00
ae3cff7528 Add Filmpalast plugin search flow and bump to 0.1.50 2026-02-02 22:16:43 +01:00
db61bb67ba Refactor code structure for improved readability and maintainability 2026-02-02 15:40:52 +01:00
4521d9fb1d Add initial pytest configuration in settings.json 2026-02-02 15:16:00 +01:00
ca362f80fe Integrate pytest coverage configuration 2026-02-01 23:32:30 +01:00
372d443cb2 Show search progress per plugin during global search 2026-02-01 23:26:12 +01:00
1e3c6ffdf6 Refine title search to whole-word matching and bump 0.1.49 2026-02-01 23:14:10 +01:00
28da41123f Improve Serienstream genre loading and bump to 0.1.48 2026-02-01 22:41:48 +01:00
dbcd9598a9 Speed up aniworld season and episode loading 2026-02-01 21:47:54 +01:00
09c6a32043 Pass series URLs through navigation for faster serienstream 2026-02-01 21:37:50 +01:00
3689aedd23 Speed up serienstream season and episode loading 2026-02-01 21:19:28 +01:00
47 changed files with 5841 additions and 957 deletions

.gitignore (vendored, +14)

@@ -6,3 +6,17 @@
 # Build outputs
 /dist/
+
+# Local tests (not committed)
+/tests/
+/TESTING/
+/.pytest_cache/
+/pytest.ini
+
+# Python artifacts
+__pycache__/
+*.pyc
+.coverage
+
+# Plugin runtime caches
+/addon/plugins/*_cache.json

.vscode/settings.json (vendored, new file, +7)

@@ -0,0 +1,7 @@
{
    "python.testing.pytestArgs": [
        "tests"
    ],
    "python.testing.unittestEnabled": false,
    "python.testing.pytestEnabled": true
}

README.md

@@ -2,23 +2,37 @@
 <img src="addon/resources/logo.png" alt="ViewIT Logo" width="220" />
-ViewIT ist ein Kodi-Addon zum Durchsuchen und Abspielen von Inhalten der unterstützten Anbieter.
+ViewIT ist ein Kodi Addon.
+Es durchsucht Provider und startet Streams.
 ## Projektstruktur
-- `addon/` Kodi-Addon-Quellcode
-- `scripts/` Build-Scripts (arbeiten mit `addon/` + `dist/`)
-- `dist/` Build-Ausgaben (ZIPs)
-- `docs/`, `tests/`
+- `addon/` Kodi Addon Quellcode
+- `scripts/` Build Scripts
+- `dist/` Build Ausgaben
+- `docs/` Doku
+- `tests/` Tests
-## Build & Release
+## Build und Release
-- Addon-Ordner bauen: `./scripts/build_install_addon.sh` → `dist/<addon_id>/`
-- Kodi-ZIP bauen: `./scripts/build_kodi_zip.sh` → `dist/<addon_id>-<version>.zip`
-- Addon-Version in `addon/addon.xml`
+- Addon Ordner bauen: `./scripts/build_install_addon.sh`
+- Kodi ZIP bauen: `./scripts/build_kodi_zip.sh`
+- Version pflegen: `addon/addon.xml`
+- Reproduzierbares ZIP: `SOURCE_DATE_EPOCH` optional setzen
-## Entwicklung (kurz)
+## Lokales Kodi Repository
-- Hauptlogik: `addon/default.py`
+- Repository bauen: `./scripts/build_local_kodi_repo.sh`
+- Repository starten: `./scripts/serve_local_kodi_repo.sh`
+- Standard URL: `http://127.0.0.1:8080/repo/addons.xml`
+- Eigene URL beim Build: `REPO_BASE_URL=http://<host>:<port>/repo ./scripts/build_local_kodi_repo.sh`
+## Entwicklung
+- Router: `addon/default.py`
 - Plugins: `addon/plugins/*_plugin.py`
-- Einstellungen: `addon/resources/settings.xml`
+- Settings: `addon/resources/settings.xml`
+## Tests
+- Dev Pakete installieren: `./.venv/bin/pip install -r requirements-dev.txt`
+- Tests starten: `./.venv/bin/pytest`
+- XML Report: `./.venv/bin/pytest --cov-report=xml`
 ## Dokumentation
 Siehe `docs/`.

addon/addon.xml

@@ -1,5 +1,5 @@
<?xml version="1.0" encoding="UTF-8"?> <?xml version='1.0' encoding='utf-8'?>
<addon id="plugin.video.viewit" name="ViewIt" version="0.1.47" provider-name="ViewIt"> <addon id="plugin.video.viewit" name="ViewIt" version="0.1.57" provider-name="ViewIt">
<requires> <requires>
<import addon="xbmc.python" version="3.0.0" /> <import addon="xbmc.python" version="3.0.0" />
<import addon="script.module.requests" /> <import addon="script.module.requests" />
@@ -10,12 +10,12 @@
<provides>video</provides> <provides>video</provides>
</extension> </extension>
<extension point="xbmc.addon.metadata"> <extension point="xbmc.addon.metadata">
<summary>ViewIt Kodi Plugin</summary> <summary>Suche und Wiedergabe fuer mehrere Quellen</summary>
<description>Streaming-Addon für Streamingseiten: Suche, Staffeln/Episoden und Wiedergabe.</description> <description>Findet Titel in unterstuetzten Quellen und startet Filme oder Episoden direkt in Kodi.</description>
<assets> <assets>
<icon>icon.png</icon> <icon>icon.png</icon>
</assets> </assets>
<license>GPL-3.0-or-later</license> <license>GPL-3.0-or-later</license>
<platform>all</platform> <platform>all</platform>
</extension> </extension>
</addon> </addon>

File diff suppressed because it is too large.

addon/http_session_pool.py

@@ -32,3 +32,12 @@ def get_requests_session(key: str, *, headers: Optional[dict[str, str]] = None):
             pass
     return session
+
+def close_all_sessions() -> None:
+    """Close and clear all pooled sessions."""
+    for session in list(_SESSIONS.values()):
+        try:
+            session.close()
+        except Exception:
+            pass
+    _SESSIONS.clear()
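
A minimal sketch of how the surrounding module presumably looks, for context: only close_all_sessions() is visible in this hunk, so the _SESSIONS dict and the body of get_requests_session() below are assumptions inferred from the call sites elsewhere in this diff.

    # Assumed module shape; only close_all_sessions() is confirmed by the hunk above.
    from typing import Optional

    import requests

    _SESSIONS: dict[str, requests.Session] = {}

    def get_requests_session(key: str, *, headers: Optional[dict[str, str]] = None) -> requests.Session:
        """Return the pooled Session for `key`, creating it on first use (assumed behaviour)."""
        session = _SESSIONS.get(key)
        if session is None:
            session = requests.Session()
            if headers:
                session.headers.update(headers)
            _SESSIONS[key] = session
        return session

    def close_all_sessions() -> None:
        """Close and clear all pooled sessions (as added above)."""
        for session in list(_SESSIONS.values()):
            try:
                session.close()
            except Exception:
                pass
        _SESSIONS.clear()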

addon/metadata_utils.py (new file, +93)

@@ -0,0 +1,93 @@
from __future__ import annotations

import re

from plugin_interface import BasisPlugin
from tmdb import TmdbCastMember

METADATA_MODE_AUTO = 0
METADATA_MODE_SOURCE = 1
METADATA_MODE_TMDB = 2
METADATA_MODE_MIX = 3


def metadata_setting_id(plugin_name: str) -> str:
    safe = re.sub(r"[^a-z0-9]+", "_", (plugin_name or "").strip().casefold()).strip("_")
    return f"{safe}_metadata_source" if safe else "metadata_source"


def plugin_supports_metadata(plugin: BasisPlugin) -> bool:
    try:
        return plugin.__class__.metadata_for is not BasisPlugin.metadata_for
    except Exception:
        return False


def metadata_policy(
    plugin_name: str,
    plugin: BasisPlugin,
    *,
    allow_tmdb: bool,
    get_setting_int=None,
) -> tuple[bool, bool, bool]:
    if not callable(get_setting_int):
        return plugin_supports_metadata(plugin), allow_tmdb, bool(getattr(plugin, "prefer_source_metadata", False))
    mode = get_setting_int(metadata_setting_id(plugin_name), default=METADATA_MODE_AUTO)
    supports_source = plugin_supports_metadata(plugin)
    if mode == METADATA_MODE_SOURCE:
        return supports_source, False, True
    if mode == METADATA_MODE_TMDB:
        return False, allow_tmdb, False
    if mode == METADATA_MODE_MIX:
        return supports_source, allow_tmdb, True
    prefer_source = bool(getattr(plugin, "prefer_source_metadata", False))
    return supports_source, allow_tmdb, prefer_source


def collect_plugin_metadata(
    plugin: BasisPlugin,
    titles: list[str],
) -> dict[str, tuple[dict[str, str], dict[str, str], list[TmdbCastMember] | None]]:
    getter = getattr(plugin, "metadata_for", None)
    if not callable(getter):
        return {}
    collected: dict[str, tuple[dict[str, str], dict[str, str], list[TmdbCastMember] | None]] = {}
    for title in titles:
        try:
            labels, art, cast = getter(title)
        except Exception:
            continue
        if isinstance(labels, dict) or isinstance(art, dict) or cast:
            label_map = {str(k): str(v) for k, v in dict(labels or {}).items() if v}
            art_map = {str(k): str(v) for k, v in dict(art or {}).items() if v}
            collected[title] = (label_map, art_map, cast if isinstance(cast, list) else None)
    return collected


def needs_tmdb(labels: dict[str, str], art: dict[str, str], *, want_plot: bool, want_art: bool) -> bool:
    if want_plot and not labels.get("plot"):
        return True
    if want_art and not (art.get("thumb") or art.get("poster") or art.get("fanart") or art.get("landscape")):
        return True
    return False


def merge_metadata(
    title: str,
    tmdb_labels: dict[str, str] | None,
    tmdb_art: dict[str, str] | None,
    tmdb_cast: list[TmdbCastMember] | None,
    plugin_meta: tuple[dict[str, str], dict[str, str], list[TmdbCastMember] | None] | None,
) -> tuple[dict[str, str], dict[str, str], list[TmdbCastMember] | None]:
    labels = dict(tmdb_labels or {})
    art = dict(tmdb_art or {})
    cast = tmdb_cast
    if plugin_meta is not None:
        meta_labels, meta_art, meta_cast = plugin_meta
        labels.update({k: str(v) for k, v in dict(meta_labels or {}).items() if v})
        art.update({k: str(v) for k, v in dict(meta_art or {}).items() if v})
        if meta_cast is not None:
            cast = meta_cast
    if "title" not in labels:
        labels["title"] = title
    return labels, art, cast
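
A hedged usage sketch of how a caller might chain these helpers: source metadata first, TMDB only when gaps remain, plugin data overlaid last by merge_metadata. decorate_titles, lookup_tmdb and get_setting_int are hypothetical names, not part of the diff.

    # Hypothetical caller; only the metadata_utils functions are real.
    from metadata_utils import collect_plugin_metadata, merge_metadata, metadata_policy, needs_tmdb

    def decorate_titles(plugin, plugin_name, titles, get_setting_int, lookup_tmdb):
        use_source, use_tmdb, _prefer_source = metadata_policy(
            plugin_name, plugin, allow_tmdb=True, get_setting_int=get_setting_int
        )
        source_meta = collect_plugin_metadata(plugin, titles) if use_source else {}
        decorated = {}
        for title in titles:
            plugin_meta = source_meta.get(title)
            labels, art = (plugin_meta[0], plugin_meta[1]) if plugin_meta else ({}, {})
            tmdb_labels = tmdb_art = tmdb_cast = None
            if use_tmdb and needs_tmdb(labels, art, want_plot=True, want_art=True):
                tmdb_labels, tmdb_art, tmdb_cast = lookup_tmdb(title)  # hypothetical TMDB helper
            decorated[title] = merge_metadata(title, tmdb_labels, tmdb_art, tmdb_cast, plugin_meta)
        return decorated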

addon/plugin_interface.py

@@ -4,16 +4,22 @@
 from __future__ import annotations

 from abc import ABC, abstractmethod
-from typing import List, Optional, Set
+from typing import Any, Callable, Dict, List, Optional, Set, Tuple


 class BasisPlugin(ABC):
     """Abstrakte Basisklasse fuer alle Integrationen."""

     name: str
+    version: str = "0.0.0"
+    prefer_source_metadata: bool = False

     @abstractmethod
-    async def search_titles(self, query: str) -> List[str]:
+    async def search_titles(
+        self,
+        query: str,
+        progress_callback: Optional[Callable[[str, Optional[int]], Any]] = None,
+    ) -> List[str]:
         """Liefert eine Liste aller Treffer fuer die Suche."""

     @abstractmethod
@@ -28,6 +34,10 @@ class BasisPlugin(ABC):
         """Optional: Liefert den Stream-Link fuer eine konkrete Folge."""
         return None

+    def metadata_for(self, title: str) -> Tuple[Dict[str, str], Dict[str, str], Optional[List[Any]]]:
+        """Optional: Liefert Info-Labels, Art und Cast fuer einen Titel."""
+        return {}, {}, None
+
     def resolve_stream_link(self, link: str) -> Optional[str]:
         """Optional: Folgt einem Stream-Link und liefert die finale URL."""
         return None
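
For orientation, a minimal plugin against the extended interface. This sketch assumes search_titles, seasons_for and episodes_for are the abstract methods, which the truncated hunks above do not fully confirm; the provider logic is a placeholder.

    from typing import Any, Callable, List, Optional

    from plugin_interface import BasisPlugin

    class DemoPlugin(BasisPlugin):
        name = "Demo"
        version = "0.1.0"

        async def search_titles(
            self,
            query: str,
            progress_callback: Optional[Callable[[str, Optional[int]], Any]] = None,
        ) -> List[str]:
            # Report progress through the new optional callback, then return placeholder hits.
            if progress_callback:
                progress_callback("Demo-Suche", 50)
            return [f"Demo: {query}"] if query.strip() else []

        def seasons_for(self, title: str) -> List[str]:
            return ["Staffel 1"]

        def episodes_for(self, title: str, season: str) -> List[str]:
            return ["Episode 1"]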

Plugin template (file path not shown)

@@ -9,7 +9,7 @@ Zum Verwenden:
 from __future__ import annotations

 from dataclasses import dataclass
-from typing import TYPE_CHECKING, Any, List, Optional, TypeAlias
+from typing import TYPE_CHECKING, Any, Callable, List, Optional

 try:  # pragma: no cover - optional dependency
     import requests
@@ -34,8 +34,8 @@ if TYPE_CHECKING:  # pragma: no cover
     from requests import Session as RequestsSession
     from bs4 import BeautifulSoup as BeautifulSoupT  # type: ignore[import-not-found]
 else:  # pragma: no cover
-    RequestsSession: TypeAlias = Any
-    BeautifulSoupT: TypeAlias = Any
+    RequestsSession = Any
+    BeautifulSoupT = Any

 ADDON_ID = "plugin.video.viewit"
@@ -88,9 +88,13 @@ class TemplatePlugin(BasisPlugin):
         self._session = session
         return self._session

-    async def search_titles(self, query: str) -> List[str]:
+    async def search_titles(
+        self,
+        query: str,
+        progress_callback: Optional[Callable[[str, Optional[int]], Any]] = None,
+    ) -> List[str]:
         """TODO: Suche auf der Zielseite implementieren."""
-        _ = query
+        _ = (query, progress_callback)
         return []

     def seasons_for(self, title: str) -> List[str]:

addon/plugins/aniworld_plugin.py

@@ -8,8 +8,13 @@ Dieses Plugin ist weitgehend kompatibel zur Serienstream-Integration:
 from __future__ import annotations

 from dataclasses import dataclass
+from html import unescape
+import hashlib
+import json
 import re
+import time
-from typing import TYPE_CHECKING, Any, Dict, List, Optional, Tuple, TypeAlias
+from typing import TYPE_CHECKING, Any, Callable, Dict, List, Optional, Tuple
+from urllib.parse import quote

 try:  # pragma: no cover - optional dependency
     import requests
@@ -25,8 +30,10 @@ else:
 try:  # pragma: no cover - optional Kodi helpers
     import xbmcaddon  # type: ignore[import-not-found]
+    import xbmcgui  # type: ignore[import-not-found]
 except ImportError:  # pragma: no cover - allow running outside Kodi
     xbmcaddon = None
+    xbmcgui = None

 from plugin_interface import BasisPlugin
 from plugin_helpers import dump_response_html, get_setting_bool, get_setting_string, log_error, log_url, notify_url
@@ -37,8 +44,8 @@ if TYPE_CHECKING:  # pragma: no cover
     from requests import Session as RequestsSession
     from bs4 import BeautifulSoup as BeautifulSoupT  # type: ignore[import-not-found]
 else:  # pragma: no cover
-    RequestsSession: TypeAlias = Any
-    BeautifulSoupT: TypeAlias = Any
+    RequestsSession = Any
+    BeautifulSoupT = Any

 SETTING_BASE_URL = "aniworld_base_url"
@@ -60,6 +67,19 @@ HEADERS = {
     "Accept-Language": "de-DE,de;q=0.9,en;q=0.8",
     "Connection": "keep-alive",
 }
+
+SESSION_CACHE_TTL_SECONDS = 300
+SESSION_CACHE_PREFIX = "viewit.aniworld"
+SESSION_CACHE_MAX_TITLE_URLS = 800
+ProgressCallback = Optional[Callable[[str, Optional[int]], Any]]
+
+def _emit_progress(callback: ProgressCallback, message: str, percent: Optional[int] = None) -> None:
+    if not callable(callback):
+        return
+    try:
+        callback(str(message or ""), None if percent is None else int(percent))
+    except Exception:
+        return

 @dataclass
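
The callback contract introduced here is (message, percent-or-None). A console consumer for testing outside Kodi might look like the sketch below; inside Kodi the same signature could feed a background progress dialog, which is an assumption about the intended UI, not shown in this diff.

    from typing import Optional

    def print_progress(message: str, percent: Optional[int]) -> None:
        # Mirrors the (message, percent) contract of _emit_progress.
        if percent is None:
            print(f"[..] {message}")
        else:
            print(f"[{percent:3d}%] {message}")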
@@ -117,7 +137,7 @@ def _latest_episodes_url() -> str:

 def _search_url(query: str) -> str:
-    return f"{_get_base_url()}/search?q={query}"
+    return f"{_get_base_url()}/search?q={quote((query or '').strip())}"

 def _search_api_url() -> str:
@@ -128,6 +148,67 @@ def _absolute_url(href: str) -> str:
     return f"{_get_base_url()}{href}" if href.startswith("/") else href

+def _session_window() -> Any:
+    if xbmcgui is None:
+        return None
+    try:
+        return xbmcgui.Window(10000)
+    except Exception:
+        return None
+
+def _session_cache_key(name: str) -> str:
+    base_hash = hashlib.sha1(_get_base_url().encode("utf-8")).hexdigest()[:12]
+    return f"{SESSION_CACHE_PREFIX}.{base_hash}.{name}"
+
+def _session_cache_get(name: str) -> Any:
+    window = _session_window()
+    if window is None:
+        return None
+    raw = ""
+    try:
+        raw = window.getProperty(_session_cache_key(name)) or ""
+    except Exception:
+        return None
+    if not raw:
+        return None
+    try:
+        payload = json.loads(raw)
+    except Exception:
+        return None
+    if not isinstance(payload, dict):
+        return None
+    expires_at = payload.get("expires_at")
+    data = payload.get("data")
+    try:
+        if float(expires_at or 0) <= time.time():
+            return None
+    except Exception:
+        return None
+    return data
+
+def _session_cache_set(name: str, data: Any, *, ttl_seconds: int = SESSION_CACHE_TTL_SECONDS) -> None:
+    window = _session_window()
+    if window is None:
+        return
+    payload = {
+        "expires_at": float(time.time() + max(1, int(ttl_seconds))),
+        "data": data,
+    }
+    try:
+        raw = json.dumps(payload, ensure_ascii=False, separators=(",", ":"))
+    except Exception:
+        return
+    if len(raw) > 240_000:
+        return
+    try:
+        window.setProperty(_session_cache_key(name), raw)
+    except Exception:
+        return

 def _log_url(url: str, *, kind: str = "VISIT") -> None:
     log_url(
         ADDON_ID,
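
The session cache serializes a {"expires_at", "data"} payload into a property on Kodi's home window (id 10000), so cached data survives plugin invocations within one Kodi session. A hedged round-trip sketch with a stand-in window object, runnable outside Kodi:

    import json
    import time

    class FakeWindow:
        """Stand-in for xbmcgui.Window(10000); stores properties in a dict."""
        def __init__(self) -> None:
            self._props: dict[str, str] = {}
        def getProperty(self, key: str) -> str:
            return self._props.get(key, "")
        def setProperty(self, key: str, value: str) -> None:
            self._props[key] = value

    window = FakeWindow()
    payload = {"expires_at": time.time() + 300, "data": ["Action", "Comedy"]}
    window.setProperty("viewit.aniworld.abc123.genres", json.dumps(payload))

    cached = json.loads(window.getProperty("viewit.aniworld.abc123.genres"))
    data = cached["data"] if float(cached["expires_at"]) > time.time() else None
    print(data)  # ['Action', 'Comedy'] while the TTL has not expired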
@@ -192,10 +273,8 @@ def _matches_query(query: str, *, title: str) -> bool:
     normalized_query = _normalize_search_text(query)
     if not normalized_query:
         return False
-    haystack = _normalize_search_text(title)
-    if not haystack:
-        return False
-    return normalized_query in haystack
+    haystack = f" {_normalize_search_text(title)} "
+    return f" {normalized_query} " in haystack

 def _ensure_requests() -> None:
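
The padding with spaces turns the substring test into whole-word matching. Isolated for illustration below; the normalizer copies the one from the Doku-Streams plugin further down, since AniWorld's own _normalize_search_text is not shown in this diff.

    import re

    def normalize(value: str) -> str:
        value = re.sub(r"[^a-z0-9]+", " ", (value or "").casefold())
        return re.sub(r"\s+", " ", value).strip()

    def matches(query: str, title: str) -> bool:
        q = normalize(query)
        return bool(q) and f" {q} " in f" {normalize(title)} "

    assert matches("one", "One Piece")        # whole word: still matches
    assert not matches("one", "Money Heist")  # substring inside "Money": no longer matches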
@@ -221,53 +300,108 @@ def _get_soup(url: str, *, session: Optional[RequestsSession] = None) -> Beautif
     _ensure_requests()
     _log_visit(url)
     sess = session or get_requests_session("aniworld", headers=HEADERS)
+    response = None
     try:
         response = sess.get(url, headers=HEADERS, timeout=DEFAULT_TIMEOUT)
         response.raise_for_status()
     except Exception as exc:
         _log_error(f"GET {url} failed: {exc}")
         raise
-    if response.url and response.url != url:
-        _log_url(response.url, kind="REDIRECT")
-    _log_response_html(url, response.text)
-    if _looks_like_cloudflare_challenge(response.text):
-        raise RuntimeError("Cloudflare-Schutz erkannt. requests reicht ggf. nicht aus.")
-    return BeautifulSoup(response.text, "html.parser")
+    try:
+        final_url = (response.url or url) if response is not None else url
+        body = (response.text or "") if response is not None else ""
+        if final_url != url:
+            _log_url(final_url, kind="REDIRECT")
+        _log_response_html(url, body)
+        if _looks_like_cloudflare_challenge(body):
+            raise RuntimeError("Cloudflare-Schutz erkannt. requests reicht ggf. nicht aus.")
+        return BeautifulSoup(body, "html.parser")
+    finally:
+        if response is not None:
+            try:
+                response.close()
+            except Exception:
+                pass

-def _get_soup_simple(url: str) -> BeautifulSoupT:
+def _get_html_simple(url: str) -> str:
     _ensure_requests()
     _log_visit(url)
     sess = get_requests_session("aniworld", headers=HEADERS)
+    response = None
     try:
         response = sess.get(url, headers=HEADERS, timeout=DEFAULT_TIMEOUT)
         response.raise_for_status()
     except Exception as exc:
         _log_error(f"GET {url} failed: {exc}")
         raise
-    if response.url and response.url != url:
-        _log_url(response.url, kind="REDIRECT")
-    _log_response_html(url, response.text)
-    if _looks_like_cloudflare_challenge(response.text):
-        raise RuntimeError("Cloudflare-Schutz erkannt. requests reicht ggf. nicht aus.")
-    return BeautifulSoup(response.text, "html.parser")
+    try:
+        final_url = (response.url or url) if response is not None else url
+        body = (response.text or "") if response is not None else ""
+        if final_url != url:
+            _log_url(final_url, kind="REDIRECT")
+        _log_response_html(url, body)
+        if _looks_like_cloudflare_challenge(body):
+            raise RuntimeError("Cloudflare-Schutz erkannt. requests reicht ggf. nicht aus.")
+        return body
+    finally:
+        if response is not None:
+            try:
+                response.close()
+            except Exception:
+                pass
+
+def _get_soup_simple(url: str) -> BeautifulSoupT:
+    body = _get_html_simple(url)
+    return BeautifulSoup(body, "html.parser")
+
+def _extract_genre_names_from_html(body: str) -> List[str]:
+    names: List[str] = []
+    seen: set[str] = set()
+    pattern = re.compile(
+        r"<div[^>]*class=[\"'][^\"']*seriesGenreList[^\"']*[\"'][^>]*>.*?<h3[^>]*>(.*?)</h3>",
+        re.IGNORECASE | re.DOTALL,
+    )
+    for match in pattern.finditer(body or ""):
+        text = re.sub(r"<[^>]+>", " ", match.group(1) or "")
+        text = unescape(re.sub(r"\s+", " ", text)).strip()
+        if not text:
+            continue
+        key = text.casefold()
+        if key in seen:
+            continue
+        seen.add(key)
+        names.append(text)
+    return names

 def _post_json(url: str, *, payload: Dict[str, str], session: Optional[RequestsSession] = None) -> Any:
     _ensure_requests()
     _log_visit(url)
     sess = session or get_requests_session("aniworld", headers=HEADERS)
-    response = sess.post(url, data=payload, headers=HEADERS, timeout=DEFAULT_TIMEOUT)
-    response.raise_for_status()
-    if response.url and response.url != url:
-        _log_url(response.url, kind="REDIRECT")
-    _log_response_html(url, response.text)
-    if _looks_like_cloudflare_challenge(response.text):
-        raise RuntimeError("Cloudflare-Schutz erkannt. requests reicht ggf. nicht aus.")
-    try:
-        return response.json()
-    except Exception:
-        return None
+    response = None
+    try:
+        response = sess.post(url, data=payload, headers=HEADERS, timeout=DEFAULT_TIMEOUT)
+        response.raise_for_status()
+        final_url = (response.url or url) if response is not None else url
+        body = (response.text or "") if response is not None else ""
+        if final_url != url:
+            _log_url(final_url, kind="REDIRECT")
+        _log_response_html(url, body)
+        if _looks_like_cloudflare_challenge(body):
+            raise RuntimeError("Cloudflare-Schutz erkannt. requests reicht ggf. nicht aus.")
+        try:
+            return response.json()
+        except Exception:
+            return None
+    finally:
+        if response is not None:
+            try:
+                response.close()
+            except Exception:
+                pass

 def _extract_canonical_url(soup: BeautifulSoupT, fallback: str) -> str:
@@ -417,15 +551,16 @@ def _extract_latest_episodes(soup: BeautifulSoupT) -> List[LatestEpisode]:
     return episodes

-def scrape_anime_detail(anime_identifier: str, max_seasons: Optional[int] = None) -> List[SeasonInfo]:
+def scrape_anime_detail(
+    anime_identifier: str,
+    max_seasons: Optional[int] = None,
+    *,
+    load_episodes: bool = True,
+) -> List[SeasonInfo]:
     _ensure_requests()
     anime_url = _series_root_url(_absolute_url(anime_identifier))
     _log_url(anime_url, kind="ANIME")
     session = get_requests_session("aniworld", headers=HEADERS)
-    try:
-        _get_soup(_get_base_url(), session=session)
-    except Exception:
-        pass
     soup = _get_soup(anime_url, session=session)
     base_anime_url = _series_root_url(_extract_canonical_url(soup, anime_url))
@@ -445,8 +580,10 @@ def scrape_anime_detail(anime_identifier: str, max_seasons: Optional[int] = None
     seasons: List[SeasonInfo] = []
     for number, url in season_links:
-        season_soup = _get_soup(url, session=session)
-        episodes = _extract_episodes(season_soup)
+        episodes: List[EpisodeInfo] = []
+        if load_episodes:
+            season_soup = _get_soup(url, session=session)
+            episodes = _extract_episodes(season_soup)
         seasons.append(SeasonInfo(number=number, url=url, episodes=episodes))
     seasons.sort(key=lambda s: s.number)
     return seasons
@@ -458,10 +595,18 @@ def resolve_redirect(target_url: str) -> Optional[str]:
     _log_visit(normalized_url)
     session = get_requests_session("aniworld", headers=HEADERS)
     _get_soup(_get_base_url(), session=session)
-    response = session.get(normalized_url, headers=HEADERS, timeout=DEFAULT_TIMEOUT, allow_redirects=True)
-    if response.url:
-        _log_url(response.url, kind="RESOLVED")
-    return response.url if response.url else None
+    response = None
+    try:
+        response = session.get(normalized_url, headers=HEADERS, timeout=DEFAULT_TIMEOUT, allow_redirects=True)
+        if response.url:
+            _log_url(response.url, kind="RESOLVED")
+        return response.url if response.url else None
+    finally:
+        if response is not None:
+            try:
+                response.close()
+            except Exception:
+                pass

 def fetch_episode_hoster_names(episode_url: str) -> List[str]:
@@ -532,11 +677,12 @@ def fetch_episode_stream_link(
     return resolved

-def search_animes(query: str) -> List[SeriesResult]:
+def search_animes(query: str, *, progress_callback: ProgressCallback = None) -> List[SeriesResult]:
     _ensure_requests()
     query = (query or "").strip()
     if not query:
         return []
+    _emit_progress(progress_callback, "AniWorld API-Suche", 15)
     session = get_requests_session("aniworld", headers=HEADERS)
     try:
         session.get(_get_base_url(), headers=HEADERS, timeout=DEFAULT_TIMEOUT)
@@ -546,7 +692,9 @@ def search_animes(query: str) -> List[SeriesResult]:
     results: List[SeriesResult] = []
     seen: set[str] = set()
     if isinstance(data, list):
-        for entry in data:
+        for idx, entry in enumerate(data, start=1):
+            if idx == 1 or idx % 50 == 0:
+                _emit_progress(progress_callback, f"API auswerten {idx}/{len(data)}", 35)
             if not isinstance(entry, dict):
                 continue
             title = _strip_html((entry.get("title") or "").strip())
@@ -568,10 +716,16 @@ def search_animes(query: str) -> List[SeriesResult]:
             seen.add(key)
             description = (entry.get("description") or "").strip()
             results.append(SeriesResult(title=title, description=description, url=url))
+        _emit_progress(progress_callback, f"API-Treffer: {len(results)}", 85)
         return results

-    soup = _get_soup_simple(_search_url(requests.utils.quote(query)))
-    for anchor in soup.select("a[href^='/anime/stream/'][href]"):
+    _emit_progress(progress_callback, "HTML-Suche (Fallback)", 55)
+    soup = _get_soup_simple(_search_url(query))
+    anchors = soup.select("a[href^='/anime/stream/'][href]")
+    total_anchors = max(1, len(anchors))
+    for idx, anchor in enumerate(anchors, start=1):
+        if idx == 1 or idx % 100 == 0:
+            _emit_progress(progress_callback, f"HTML auswerten {idx}/{total_anchors}", 70)
         href = (anchor.get("href") or "").strip()
         if not href or "/staffel-" in href or "/episode-" in href:
             continue
@@ -589,15 +743,20 @@ def search_animes(query: str) -> List[SeriesResult]:
             continue
         seen.add(key)
         results.append(SeriesResult(title=title, description="", url=url))
+    _emit_progress(progress_callback, f"HTML-Treffer: {len(results)}", 85)
     return results

 class AniworldPlugin(BasisPlugin):
     name = "Aniworld"
+    version = "1.0.0"

     def __init__(self) -> None:
         self._anime_results: Dict[str, SeriesResult] = {}
+        self._title_url_cache: Dict[str, str] = self._load_title_url_cache()
+        self._genre_names_cache: Optional[List[str]] = None
         self._season_cache: Dict[str, List[SeasonInfo]] = {}
+        self._season_links_cache: Dict[str, List[SeasonInfo]] = {}
         self._episode_label_cache: Dict[Tuple[str, str], Dict[str, EpisodeInfo]] = {}
         self._popular_cache: Optional[List[SeriesResult]] = None
         self._genre_cache: Optional[Dict[str, List[SeriesResult]]] = None
@@ -615,6 +774,132 @@ class AniworldPlugin(BasisPlugin):
         if REQUESTS_IMPORT_ERROR:
             print(f"AniworldPlugin Importfehler: {REQUESTS_IMPORT_ERROR}")

+    def _load_title_url_cache(self) -> Dict[str, str]:
+        raw = _session_cache_get("title_urls")
+        if not isinstance(raw, dict):
+            return {}
+        result: Dict[str, str] = {}
+        for key, value in raw.items():
+            key_text = str(key or "").strip().casefold()
+            url_text = str(value or "").strip()
+            if not key_text or not url_text:
+                continue
+            result[key_text] = url_text
+        return result
+
+    def _save_title_url_cache(self) -> None:
+        if not self._title_url_cache:
+            return
+        while len(self._title_url_cache) > SESSION_CACHE_MAX_TITLE_URLS:
+            self._title_url_cache.pop(next(iter(self._title_url_cache)))
+        _session_cache_set("title_urls", self._title_url_cache)
+
+    def _remember_anime_result(
+        self,
+        title: str,
+        url: str,
+        description: str = "",
+        *,
+        persist: bool = True,
+    ) -> bool:
+        title = (title or "").strip()
+        url = (url or "").strip()
+        if not title:
+            return False
+        changed = False
+        current = self._anime_results.get(title)
+        if current is None or (url and current.url != url) or (description and current.description != description):
+            self._anime_results[title] = SeriesResult(title=title, description=description, url=url)
+            changed = True
+        if url:
+            key = title.casefold()
+            if self._title_url_cache.get(key) != url:
+                self._title_url_cache[key] = url
+                changed = True
+        if changed and persist:
+            self._save_title_url_cache()
+        return changed
+
+    @staticmethod
+    def _season_links_cache_name(series_url: str) -> str:
+        digest = hashlib.sha1((series_url or "").encode("utf-8")).hexdigest()[:20]
+        return f"season_links.{digest}"
+
+    @staticmethod
+    def _season_episodes_cache_name(season_url: str) -> str:
+        digest = hashlib.sha1((season_url or "").encode("utf-8")).hexdigest()[:20]
+        return f"season_episodes.{digest}"
+
+    def _load_session_season_links(self, series_url: str) -> Optional[List[SeasonInfo]]:
+        raw = _session_cache_get(self._season_links_cache_name(series_url))
+        if not isinstance(raw, list):
+            return None
+        seasons: List[SeasonInfo] = []
+        for item in raw:
+            if not isinstance(item, dict):
+                continue
+            try:
+                number = int(item.get("number"))
+            except Exception:
+                continue
+            url = str(item.get("url") or "").strip()
+            if number <= 0 or not url:
+                continue
+            seasons.append(SeasonInfo(number=number, url=url, episodes=[]))
+        if not seasons:
+            return None
+        seasons.sort(key=lambda s: s.number)
+        return seasons
+
+    def _save_session_season_links(self, series_url: str, seasons: List[SeasonInfo]) -> None:
+        payload = [{"number": int(season.number), "url": season.url} for season in seasons if season.url]
+        if payload:
+            _session_cache_set(self._season_links_cache_name(series_url), payload)
+
+    def _load_session_season_episodes(self, season_url: str) -> Optional[List[EpisodeInfo]]:
+        raw = _session_cache_get(self._season_episodes_cache_name(season_url))
+        if not isinstance(raw, list):
+            return None
+        episodes: List[EpisodeInfo] = []
+        for item in raw:
+            if not isinstance(item, dict):
+                continue
+            try:
+                number = int(item.get("number"))
+            except Exception:
+                continue
+            title = str(item.get("title") or "").strip()
+            original_title = str(item.get("original_title") or "").strip()
+            url = str(item.get("url") or "").strip()
+            if number <= 0:
+                continue
+            episodes.append(
+                EpisodeInfo(
+                    number=number,
+                    title=title or f"Episode {number}",
+                    original_title=original_title,
+                    url=url,
+                )
+            )
+        if not episodes:
+            return None
+        episodes.sort(key=lambda item: item.number)
+        return episodes
+
+    def _save_session_season_episodes(self, season_url: str, episodes: List[EpisodeInfo]) -> None:
+        payload = []
+        for item in episodes:
+            payload.append(
+                {
+                    "number": int(item.number),
+                    "title": item.title,
+                    "original_title": item.original_title,
+                    "url": item.url,
+                }
+            )
+        if payload:
+            _session_cache_set(self._season_episodes_cache_name(season_url), payload)
+
     def capabilities(self) -> set[str]:
         return {"popular_series", "genres", "latest_episodes"}
@@ -629,6 +914,12 @@ class AniworldPlugin(BasisPlugin):
         wanted = title.casefold().strip()
+        cached_url = self._title_url_cache.get(wanted, "")
+        if cached_url:
+            result = SeriesResult(title=title, description="", url=cached_url)
+            self._anime_results[title] = result
+            return result
         for candidate in self._anime_results.values():
             if candidate.title and candidate.title.casefold().strip() == wanted:
                 return candidate
@@ -636,7 +927,7 @@ class AniworldPlugin(BasisPlugin):
         try:
             for entry in self._ensure_popular():
                 if entry.title and entry.title.casefold().strip() == wanted:
-                    self._anime_results[entry.title] = entry
+                    self._remember_anime_result(entry.title, entry.url, entry.description)
                     return entry
         except Exception:
             pass
@@ -645,7 +936,7 @@ class AniworldPlugin(BasisPlugin):
             for entries in self._ensure_genres().values():
                 for entry in entries:
                     if entry.title and entry.title.casefold().strip() == wanted:
-                        self._anime_results[entry.title] = entry
+                        self._remember_anime_result(entry.title, entry.url, entry.description)
                         return entry
         except Exception:
             pass
@@ -653,7 +944,7 @@ class AniworldPlugin(BasisPlugin):
         try:
             for entry in search_animes(title):
                 if entry.title and entry.title.casefold().strip() == wanted:
-                    self._anime_results[entry.title] = entry
+                    self._remember_anime_result(entry.title, entry.url, entry.description)
                     return entry
         except Exception:
             pass
@@ -665,6 +956,7 @@ class AniworldPlugin(BasisPlugin):
             return list(self._popular_cache)
         soup = _get_soup_simple(_popular_animes_url())
         results: List[SeriesResult] = []
+        cache_dirty = False
         seen: set[str] = set()
         for anchor in soup.select("div.seriesListContainer a[href^='/anime/stream/']"):
             href = (anchor.get("href") or "").strip()
@@ -686,6 +978,9 @@ class AniworldPlugin(BasisPlugin):
                 continue
             seen.add(key)
             results.append(SeriesResult(title=title, description=description, url=url))
+            cache_dirty = self._remember_anime_result(title, url, description, persist=False) or cache_dirty
+        if cache_dirty:
+            self._save_title_url_cache()
         self._popular_cache = list(results)
         return list(results)
@@ -693,7 +988,11 @@ class AniworldPlugin(BasisPlugin):
         if not self._requests_available:
             return []
         entries = self._ensure_popular()
-        self._anime_results.update({entry.title: entry for entry in entries if entry.title})
+        cache_dirty = False
+        for entry in entries:
+            cache_dirty = self._remember_anime_result(entry.title, entry.url, entry.description, persist=False) or cache_dirty
+        if cache_dirty:
+            self._save_title_url_cache()
         return [entry.title for entry in entries if entry.title]

     def latest_episodes(self, page: int = 1) -> List[LatestEpisode]:
@@ -723,6 +1022,7 @@ class AniworldPlugin(BasisPlugin):
             return {key: list(value) for key, value in self._genre_cache.items()}
         soup = _get_soup_simple(_genres_url())
         results: Dict[str, List[SeriesResult]] = {}
+        cache_dirty = False
         genre_blocks = soup.select("#seriesContainer div.genre")
         if not genre_blocks:
             genre_blocks = soup.select("div.genre")
@@ -748,9 +1048,14 @@ class AniworldPlugin(BasisPlugin):
                 continue
             seen.add(key)
             entries.append(SeriesResult(title=title, description="", url=url))
+            cache_dirty = self._remember_anime_result(title, url, persist=False) or cache_dirty
         if entries:
             results[genre_name] = entries
+        if cache_dirty:
+            self._save_title_url_cache()
         self._genre_cache = {key: list(value) for key, value in results.items()}
+        self._genre_names_cache = sorted(self._genre_cache.keys(), key=str.casefold)
+        _session_cache_set("genres", self._genre_names_cache)
         # Für spätere Auflösung (Seasons/Episoden) die Titel->URL Zuordnung auffüllen.
         for entries in results.values():
             for entry in entries:
@@ -760,11 +1065,31 @@ class AniworldPlugin(BasisPlugin):
                 self._anime_results[entry.title] = entry
         return {key: list(value) for key, value in results.items()}

+    def _ensure_genre_names(self) -> List[str]:
+        if self._genre_names_cache is not None:
+            return list(self._genre_names_cache)
+        cached = _session_cache_get("genres")
+        if isinstance(cached, list):
+            names = [str(value).strip() for value in cached if str(value).strip()]
+            if names:
+                self._genre_names_cache = sorted(set(names), key=str.casefold)
+                return list(self._genre_names_cache)
+        try:
+            body = _get_html_simple(_genres_url())
+            names = _extract_genre_names_from_html(body)
+        except Exception:
+            names = []
+        if not names:
+            mapping = self._ensure_genres()
+            names = list(mapping.keys())
+        self._genre_names_cache = sorted({name for name in names if name}, key=str.casefold)
+        _session_cache_set("genres", self._genre_names_cache)
+        return list(self._genre_names_cache)
+
     def genres(self) -> List[str]:
         if not self._requests_available:
             return []
-        genres = list(self._ensure_genres().keys())
-        return [g for g in genres if g]
+        return self._ensure_genre_names()

     def titles_for_genre(self, genre: str) -> List[str]:
         genre = (genre or "").strip()
@@ -781,7 +1106,11 @@ class AniworldPlugin(BasisPlugin):
         if not entries:
             return []
         # Zusätzlich sicherstellen, dass die Titel im Cache sind.
-        self._anime_results.update({entry.title: entry for entry in entries if entry.title and entry.title not in self._anime_results})
+        cache_dirty = False
+        for entry in entries:
+            cache_dirty = self._remember_anime_result(entry.title, entry.url, entry.description, persist=False) or cache_dirty
+        if cache_dirty:
+            self._save_title_url_cache()
         return [entry.title for entry in entries if entry.title]

     def _season_label(self, number: int) -> str:
@@ -801,67 +1130,136 @@ class AniworldPlugin(BasisPlugin):
         cache_key = (title, season_label)
         self._episode_label_cache[cache_key] = {self._episode_label(info): info for info in season_info.episodes}

+    def remember_series_url(self, title: str, series_url: str) -> None:
+        title = (title or "").strip()
+        series_url = (series_url or "").strip()
+        if not title or not series_url:
+            return
+        self._remember_anime_result(title, series_url)
+
+    def series_url_for_title(self, title: str) -> str:
+        title = (title or "").strip()
+        if not title:
+            return ""
+        direct = self._anime_results.get(title)
+        if direct and direct.url:
+            return direct.url
+        wanted = title.casefold().strip()
+        cached_url = self._title_url_cache.get(wanted, "")
+        if cached_url:
+            return cached_url
+        for candidate in self._anime_results.values():
+            if candidate.title and candidate.title.casefold().strip() == wanted and candidate.url:
+                return candidate.url
+        return ""
+
+    def _ensure_season_links(self, title: str) -> List[SeasonInfo]:
+        cached = self._season_links_cache.get(title)
+        if cached is not None:
+            return list(cached)
+        anime = self._find_series_by_title(title)
+        if not anime:
+            return []
+        session_links = self._load_session_season_links(anime.url)
+        if session_links:
+            self._season_links_cache[title] = list(session_links)
+            return list(session_links)
+        seasons = scrape_anime_detail(anime.url, load_episodes=False)
+        self._season_links_cache[title] = list(seasons)
+        self._save_session_season_links(anime.url, seasons)
+        return list(seasons)
+
+    def _ensure_season_episodes(self, title: str, season_number: int) -> Optional[SeasonInfo]:
+        seasons = self._season_cache.get(title) or []
+        for season in seasons:
+            if season.number == season_number and season.episodes:
+                return season
+        links = self._ensure_season_links(title)
+        target = next((season for season in links if season.number == season_number), None)
+        if not target:
+            return None
+        cached_episodes = self._load_session_season_episodes(target.url)
+        if cached_episodes:
+            season_info = SeasonInfo(number=target.number, url=target.url, episodes=list(cached_episodes))
+            updated = [season for season in seasons if season.number != season_number]
+            updated.append(season_info)
+            updated.sort(key=lambda item: item.number)
+            self._season_cache[title] = updated
+            return season_info
+        season_soup = _get_soup(target.url, session=get_requests_session("aniworld", headers=HEADERS))
+        season_info = SeasonInfo(number=target.number, url=target.url, episodes=_extract_episodes(season_soup))
+        updated = [season for season in seasons if season.number != season_number]
+        updated.append(season_info)
+        updated.sort(key=lambda item: item.number)
+        self._season_cache[title] = updated
+        self._save_session_season_episodes(target.url, season_info.episodes)
+        return season_info
+
     def _lookup_episode(self, title: str, season_label: str, episode_label: str) -> Optional[EpisodeInfo]:
         cache_key = (title, season_label)
         cached = self._episode_label_cache.get(cache_key)
         if cached:
             return cached.get(episode_label)
-        seasons = self._ensure_seasons(title)
         number = self._parse_season_number(season_label)
         if number is None:
             return None
-        for season_info in seasons:
-            if season_info.number == number:
-                self._cache_episode_labels(title, season_label, season_info)
-                return self._episode_label_cache.get(cache_key, {}).get(episode_label)
-        return None
+        season_info = self._ensure_season_episodes(title, number)
+        if season_info:
+            self._cache_episode_labels(title, season_label, season_info)
+            return self._episode_label_cache.get(cache_key, {}).get(episode_label)
+        return None

-    async def search_titles(self, query: str) -> List[str]:
+    async def search_titles(self, query: str, progress_callback: ProgressCallback = None) -> List[str]:
         query = (query or "").strip()
         if not query:
             self._anime_results.clear()
             self._season_cache.clear()
+            self._season_links_cache.clear()
             self._episode_label_cache.clear()
             self._popular_cache = None
             return []
         if not self._requests_available:
             raise RuntimeError("AniworldPlugin kann ohne requests/bs4 nicht suchen.")
         try:
-            results = search_animes(query)
+            _emit_progress(progress_callback, "AniWorld Suche startet", 10)
+            results = search_animes(query, progress_callback=progress_callback)
         except Exception as exc:  # pragma: no cover
             self._anime_results.clear()
             self._season_cache.clear()
             self._episode_label_cache.clear()
             raise RuntimeError(f"AniWorld-Suche fehlgeschlagen: {exc}") from exc
-        self._anime_results = {result.title: result for result in results}
+        self._anime_results = {}
+        cache_dirty = False
+        for result in results:
+            cache_dirty = self._remember_anime_result(result.title, result.url, result.description, persist=False) or cache_dirty
+        if cache_dirty:
+            self._save_title_url_cache()
         self._season_cache.clear()
+        self._season_links_cache.clear()
         self._episode_label_cache.clear()
+        _emit_progress(progress_callback, f"Treffer aufbereitet: {len(results)}", 95)
         return [result.title for result in results]

     def _ensure_seasons(self, title: str) -> List[SeasonInfo]:
         if title in self._season_cache:
             return self._season_cache[title]
-        anime = self._find_series_by_title(title)
-        if not anime:
-            return []
-        seasons = scrape_anime_detail(anime.url)
+        seasons = self._ensure_season_links(title)
         self._season_cache[title] = list(seasons)
         return list(seasons)

     def seasons_for(self, title: str) -> List[str]:
         seasons = self._ensure_seasons(title)
-        return [self._season_label(season.number) for season in seasons if season.episodes]
+        return [self._season_label(season.number) for season in seasons]

     def episodes_for(self, title: str, season: str) -> List[str]:
-        seasons = self._ensure_seasons(title)
         number = self._parse_season_number(season)
         if number is None:
             return []
-        for season_info in seasons:
-            if season_info.number == number:
-                labels = [self._episode_label(info) for info in season_info.episodes]
-                self._cache_episode_labels(title, season, season_info)
-                return labels
-        return []
+        season_info = self._ensure_season_episodes(title, number)
+        if season_info:
+            labels = [self._episode_label(info) for info in season_info.episodes]
+            self._cache_episode_labels(title, season, season_info)
+            return labels
+        return []

     def stream_link_for(self, title: str, season: str, episode: str) -> Optional[str]:
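
A hedged driver for the new async search signature, reusing the print_progress consumer sketched earlier. Outside Kodi this exercises the API-first, HTML-fallback search path end to end; network access to the configured base URL is required.

    import asyncio

    async def main() -> None:
        plugin = AniworldPlugin()
        titles = await plugin.search_titles("one piece", progress_callback=print_progress)
        print(f"{len(titles)} Treffer")

    asyncio.run(main())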
@@ -875,6 +1273,18 @@ class AniworldPlugin(BasisPlugin):
         _log_url(link, kind="FOUND")
         return link

+    def episode_url_for(self, title: str, season: str, episode: str) -> str:
+        cache_key = (title, season)
+        cached = self._episode_label_cache.get(cache_key)
+        if cached:
+            info = cached.get(episode)
+            if info and info.url:
+                return info.url
+        episode_info = self._lookup_episode(title, season, episode)
+        if episode_info and episode_info.url:
+            return episode_info.url
+        return ""
+
     def available_hosters_for(self, title: str, season: str, episode: str) -> List[str]:
         if not self._requests_available:
             raise RuntimeError("AniworldPlugin kann ohne requests/bs4 keine Hoster laden.")

Doku-Streams plugin (new file, +499; file path not shown)

@@ -0,0 +1,499 @@
"""Doku-Streams (doku-streams.com) Integration."""
from __future__ import annotations
from dataclasses import dataclass
import re
from urllib.parse import quote
from typing import TYPE_CHECKING, Any, Callable, Dict, List, Optional
try: # pragma: no cover - optional dependency
import requests
from bs4 import BeautifulSoup # type: ignore[import-not-found]
except ImportError as exc: # pragma: no cover - optional dependency
requests = None
BeautifulSoup = None
REQUESTS_AVAILABLE = False
REQUESTS_IMPORT_ERROR = exc
else:
REQUESTS_AVAILABLE = True
REQUESTS_IMPORT_ERROR = None
from plugin_interface import BasisPlugin
from plugin_helpers import dump_response_html, get_setting_bool, get_setting_string, log_error, log_url, notify_url
from http_session_pool import get_requests_session
if TYPE_CHECKING: # pragma: no cover
from requests import Session as RequestsSession
from bs4 import BeautifulSoup as BeautifulSoupT # type: ignore[import-not-found]
else: # pragma: no cover
RequestsSession = Any
BeautifulSoupT = Any
ADDON_ID = "plugin.video.viewit"
SETTING_BASE_URL = "doku_streams_base_url"
DEFAULT_BASE_URL = "https://doku-streams.com"
MOST_VIEWED_PATH = "/meistgesehene/"
DEFAULT_TIMEOUT = 20
GLOBAL_SETTING_LOG_URLS = "debug_log_urls"
GLOBAL_SETTING_DUMP_HTML = "debug_dump_html"
GLOBAL_SETTING_SHOW_URL_INFO = "debug_show_url_info"
GLOBAL_SETTING_LOG_ERRORS = "debug_log_errors"
SETTING_LOG_URLS = "log_urls_dokustreams"
SETTING_DUMP_HTML = "dump_html_dokustreams"
SETTING_SHOW_URL_INFO = "show_url_info_dokustreams"
SETTING_LOG_ERRORS = "log_errors_dokustreams"
ProgressCallback = Optional[Callable[[str, Optional[int]], Any]]
def _emit_progress(callback: ProgressCallback, message: str, percent: Optional[int] = None) -> None:
if not callable(callback):
return
try:
callback(str(message or ""), None if percent is None else int(percent))
except Exception:
return
HEADERS = {
"User-Agent": "Mozilla/5.0 (Kodi; ViewIt) AppleWebKit/537.36 (KHTML, like Gecko)",
"Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
"Accept-Language": "de-DE,de;q=0.9,en;q=0.8",
"Connection": "keep-alive",
}
@dataclass(frozen=True)
class SearchHit:
title: str
url: str
plot: str = ""
poster: str = ""
def _extract_last_page(soup: BeautifulSoupT) -> int:
    max_page = 1
    if not soup:
        return max_page
    for anchor in soup.select("nav.navigation a[href], nav.pagination a[href], a.page-numbers[href]"):
        text = (anchor.get_text(" ", strip=True) or "").strip()
        for candidate in (text, (anchor.get("href") or "").strip()):
            for value in re.findall(r"/page/(\d+)/", candidate):
                try:
                    max_page = max(max_page, int(value))
                except Exception:
                    continue
            for value in re.findall(r"(\d+)", candidate):
                try:
                    max_page = max(max_page, int(value))
                except Exception:
                    continue
    return max_page
def _extract_summary_and_poster(article: BeautifulSoupT) -> tuple[str, str]:
    summary = ""
    if article:
        summary_box = article.select_one("div.entry-summary")
        if summary_box is not None:
            for p in summary_box.find_all("p"):
                text = (p.get_text(" ", strip=True) or "").strip()
                if text:
                    summary = text
                    break
    poster = ""
    if article:
        img = article.select_one("div.entry-thumb img")
        if img is not None:
            poster = (img.get("data-src") or "").strip() or (img.get("src") or "").strip()
            if "lazy_placeholder" in poster and img.get("data-src"):
                poster = (img.get("data-src") or "").strip()
            poster = _absolute_url(poster)
    return summary, poster


def _parse_listing_hits(soup: BeautifulSoupT, *, query: str = "") -> List[SearchHit]:
    hits: List[SearchHit] = []
    if not soup:
        return hits
    seen_titles: set[str] = set()
    seen_urls: set[str] = set()
    for article in soup.select("article[id^='post-']"):
        anchor = article.select_one("h2.entry-title a[href]")
        if anchor is None:
            continue
        href = (anchor.get("href") or "").strip()
        title = (anchor.get_text(" ", strip=True) or "").strip()
        if not href or not title:
            continue
        if query and not _matches_query(query, title=title):
            continue
        url = _absolute_url(href).split("#", 1)[0].split("?", 1)[0].rstrip("/")
        title_key = title.casefold()
        url_key = url.casefold()
        if title_key in seen_titles or url_key in seen_urls:
            continue
        seen_titles.add(title_key)
        seen_urls.add(url_key)
        _log_url_event(url, kind="PARSE")
        summary, poster = _extract_summary_and_poster(article)
        hits.append(SearchHit(title=title, url=url, plot=summary, poster=poster))
    return hits


def _get_base_url() -> str:
    base = get_setting_string(ADDON_ID, SETTING_BASE_URL, default=DEFAULT_BASE_URL).strip()
    if not base:
        base = DEFAULT_BASE_URL
    return base.rstrip("/")


def _absolute_url(url: str) -> str:
    url = (url or "").strip()
    if not url:
        return ""
    if url.startswith("http://") or url.startswith("https://"):
        return url
    if url.startswith("//"):
        return f"https:{url}"
    if url.startswith("/"):
        return f"{_get_base_url()}{url}"
    return f"{_get_base_url()}/{url.lstrip('/')}"


def _normalize_search_text(value: str) -> str:
    value = (value or "").casefold()
    value = re.sub(r"[^a-z0-9]+", " ", value)
    value = re.sub(r"\s+", " ", value).strip()
    return value


def _matches_query(query: str, *, title: str) -> bool:
    normalized_query = _normalize_search_text(query)
    if not normalized_query:
        return False
    haystack = f" {_normalize_search_text(title)} "
    return f" {normalized_query} " in haystack


def _log_url_event(url: str, *, kind: str = "VISIT") -> None:
    log_url(
        ADDON_ID,
        enabled_setting_id=GLOBAL_SETTING_LOG_URLS,
        plugin_setting_id=SETTING_LOG_URLS,
        log_filename="dokustreams_urls.log",
        url=url,
        kind=kind,
    )


def _log_visit(url: str) -> None:
    _log_url_event(url, kind="VISIT")
    notify_url(
        ADDON_ID,
        heading="Doku-Streams",
        url=url,
        enabled_setting_id=GLOBAL_SETTING_SHOW_URL_INFO,
        plugin_setting_id=SETTING_SHOW_URL_INFO,
    )


def _log_response_html(url: str, body: str) -> None:
    dump_response_html(
        ADDON_ID,
        enabled_setting_id=GLOBAL_SETTING_DUMP_HTML,
        plugin_setting_id=SETTING_DUMP_HTML,
        url=url,
        body=body,
        filename_prefix="dokustreams_response",
    )


def _log_error_message(message: str) -> None:
    log_error(
        ADDON_ID,
        enabled_setting_id=GLOBAL_SETTING_LOG_ERRORS,
        plugin_setting_id=SETTING_LOG_ERRORS,
        log_filename="dokustreams_errors.log",
        message=message,
    )


def _get_soup(url: str, *, session: Optional[RequestsSession] = None) -> BeautifulSoupT:
    if requests is None or BeautifulSoup is None:
        raise RuntimeError("requests/bs4 sind nicht verfuegbar.")
    _log_visit(url)
    sess = session or get_requests_session("dokustreams", headers=HEADERS)
    response = None
    try:
        response = sess.get(url, headers=HEADERS, timeout=DEFAULT_TIMEOUT)
        response.raise_for_status()
    except Exception as exc:
        _log_error_message(f"GET {url} failed: {exc}")
        raise
    try:
        final_url = (response.url or url) if response is not None else url
        body = (response.text or "") if response is not None else ""
        if final_url != url:
            _log_url_event(final_url, kind="REDIRECT")
        _log_response_html(url, body)
        return BeautifulSoup(body, "html.parser")
    finally:
        if response is not None:
            try:
                response.close()
            except Exception:
                pass


class DokuStreamsPlugin(BasisPlugin):
    name = "Doku-Streams"
    version = "1.0.0"
    prefer_source_metadata = True

    def __init__(self) -> None:
        self._title_to_url: Dict[str, str] = {}
        self._category_to_url: Dict[str, str] = {}
        self._category_page_count_cache: Dict[str, int] = {}
        self._popular_cache: Optional[List[SearchHit]] = None
        self._title_meta: Dict[str, tuple[str, str]] = {}
        self._requests_available = REQUESTS_AVAILABLE
        self.is_available = True
        self.unavailable_reason: Optional[str] = None
        if not self._requests_available:  # pragma: no cover - optional dependency
            self.is_available = False
            self.unavailable_reason = (
                "requests/bs4 fehlen. Installiere 'requests' und 'beautifulsoup4'."
            )
            if REQUESTS_IMPORT_ERROR:
                print(f"DokuStreamsPlugin Importfehler: {REQUESTS_IMPORT_ERROR}")

    async def search_titles(self, query: str, progress_callback: ProgressCallback = None) -> List[str]:
        _emit_progress(progress_callback, "Doku-Streams Suche", 15)
        hits = self._search_hits(query)
        _emit_progress(progress_callback, f"Treffer verarbeiten ({len(hits)})", 70)
        self._title_to_url = {hit.title: hit.url for hit in hits if hit.title and hit.url}
        for hit in hits:
            if hit.title:
                self._title_meta[hit.title] = (hit.plot, hit.poster)
        titles = [hit.title for hit in hits if hit.title]
        titles.sort(key=lambda value: value.casefold())
        _emit_progress(progress_callback, f"Fertig: {len(titles)} Treffer", 95)
        return titles

    def _search_hits(self, query: str) -> List[SearchHit]:
        query = (query or "").strip()
        if not query or not self._requests_available:
            return []
        search_url = _absolute_url(f"/?s={quote(query)}")
        session = get_requests_session("dokustreams", headers=HEADERS)
        try:
            soup = _get_soup(search_url, session=session)
        except Exception:
            return []
        return _parse_listing_hits(soup, query=query)
def capabilities(self) -> set[str]:
return {"genres", "popular_series"}
def _categories_url(self) -> str:
return _absolute_url("/kategorien/")
def _parse_categories(self, soup: BeautifulSoupT) -> Dict[str, str]:
categories: Dict[str, str] = {}
if not soup:
return categories
root = soup.select_one("ul.nested-category-list")
if root is None:
return categories
def clean_name(value: str) -> str:
value = (value or "").strip()
return re.sub(r"\\s*\\(\\d+\\)\\s*$", "", value).strip()
def walk(ul, parents: List[str]) -> None:
for li in ul.find_all("li", recursive=False):
anchor = li.find("a", href=True)
if anchor is None:
continue
name = clean_name(anchor.get_text(" ", strip=True) or "")
href = (anchor.get("href") or "").strip()
if not name or not href:
continue
child_ul = li.find("ul", class_="nested-category-list")
if child_ul is not None:
walk(child_ul, parents + [name])
else:
if parents:
label = " \u2192 ".join(parents + [name])
categories[label] = _absolute_url(href)
walk(root, [])
return categories
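# Flattening sketch: walk() records only leaf categories that sit below a
# parent (top-level leaves are collected by _parse_top_categories below). A
# hypothetical tree "Natur (12)" containing "Tiere (5)" and "Pflanzen (7)"
# yields {"Natur \u2192 Tiere": <url>, "Natur \u2192 Pflanzen": <url>} once
# clean_name() has stripped the "(count)" suffixes.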
def _parse_top_categories(self, soup: BeautifulSoupT) -> Dict[str, str]:
categories: Dict[str, str] = {}
if not soup:
return categories
root = soup.select_one("ul.nested-category-list")
if root is None:
return categories
for li in root.find_all("li", recursive=False):
anchor = li.find("a", href=True)
if anchor is None:
continue
name = (anchor.get_text(" ", strip=True) or "").strip()
href = (anchor.get("href") or "").strip()
if not name or not href:
continue
categories[name] = _absolute_url(href)
return categories
def genres(self) -> List[str]:
if not self._requests_available:
return []
if self._category_to_url:
return sorted(self._category_to_url.keys(), key=lambda value: value.casefold())
try:
soup = _get_soup(self._categories_url(), session=get_requests_session("dokustreams", headers=HEADERS))
except Exception:
return []
parsed = self._parse_categories(soup)
if parsed:
self._category_to_url = dict(parsed)
return sorted(self._category_to_url.keys(), key=lambda value: value.casefold())
def categories(self) -> List[str]:
if not self._requests_available:
return []
try:
soup = _get_soup(self._categories_url(), session=get_requests_session("dokustreams", headers=HEADERS))
except Exception:
return []
parsed = self._parse_top_categories(soup)
if parsed:
for key, value in parsed.items():
self._category_to_url.setdefault(key, value)
return list(parsed.keys())
def genre_page_count(self, genre: str) -> int:
genre = (genre or "").strip()
if not genre:
return 1
if genre in self._category_page_count_cache:
return max(1, int(self._category_page_count_cache.get(genre, 1)))
if not self._category_to_url:
self.genres()
base_url = self._category_to_url.get(genre, "")
if not base_url:
return 1
try:
soup = _get_soup(base_url, session=get_requests_session("dokustreams", headers=HEADERS))
except Exception:
return 1
pages = _extract_last_page(soup)
self._category_page_count_cache[genre] = max(1, pages)
return self._category_page_count_cache[genre]
def titles_for_genre_page(self, genre: str, page: int) -> List[str]:
genre = (genre or "").strip()
if not genre or not self._requests_available:
return []
if not self._category_to_url:
self.genres()
base_url = self._category_to_url.get(genre, "")
if not base_url:
return []
page = max(1, int(page or 1))
url = base_url if page == 1 else f"{base_url.rstrip('/')}/page/{page}/"
try:
soup = _get_soup(url, session=get_requests_session("dokustreams", headers=HEADERS))
except Exception:
return []
hits = _parse_listing_hits(soup)
for hit in hits:
if hit.title:
self._title_meta[hit.title] = (hit.plot, hit.poster)
titles = [hit.title for hit in hits if hit.title]
self._title_to_url.update({hit.title: hit.url for hit in hits if hit.title and hit.url})
return titles
def titles_for_genre(self, genre: str) -> List[str]:
titles = self.titles_for_genre_page(genre, 1)
titles.sort(key=lambda value: value.casefold())
return titles
def _most_viewed_url(self) -> str:
return _absolute_url(MOST_VIEWED_PATH)
def popular_series(self) -> List[str]:
if not self._requests_available:
return []
if self._popular_cache is not None:
titles = [hit.title for hit in self._popular_cache if hit.title]
titles.sort(key=lambda value: value.casefold())
return titles
try:
soup = _get_soup(self._most_viewed_url(), session=get_requests_session("dokustreams", headers=HEADERS))
except Exception:
return []
hits = _parse_listing_hits(soup)
self._popular_cache = list(hits)
self._title_to_url.update({hit.title: hit.url for hit in hits if hit.title and hit.url})
for hit in hits:
if hit.title:
self._title_meta[hit.title] = (hit.plot, hit.poster)
titles = [hit.title for hit in hits if hit.title]
titles.sort(key=lambda value: value.casefold())
return titles
def metadata_for(self, title: str) -> tuple[dict[str, str], dict[str, str], list[object] | None]:
title = (title or "").strip()
if not title:
return {}, {}, None
plot, poster = self._title_meta.get(title, ("", ""))
info: dict[str, str] = {"title": title}
if plot:
info["plot"] = plot
art: dict[str, str] = {}
if poster:
art = {"thumb": poster, "poster": poster}
return info, art, None
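# metadata_for() returns an (info, art, cast) triple: plot and poster were
# captured earlier from the listing pages, and cast stays None because the
# listings expose none; callers presumably map info/art onto the Kodi ListItem.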
def seasons_for(self, title: str) -> List[str]:
title = (title or "").strip()
if not title or title not in self._title_to_url:
return []
return ["Stream"]
def episodes_for(self, title: str, season: str) -> List[str]:
title = (title or "").strip()
if not title or title not in self._title_to_url:
return []
return [title]
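# Documentaries on this provider are single streams, so the season/episode
# model is collapsed: one pseudo-season "Stream" whose only "episode" is the
# title itself; stream_link_for() below ignores season and episode entirely.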
def stream_link_for(self, title: str, season: str, episode: str) -> Optional[str]:
title = (title or "").strip()
if not title:
return None
url = self._title_to_url.get(title)
if not url:
return None
if not self._requests_available:
return None
try:
soup = _get_soup(url, session=get_requests_session("dokustreams", headers=HEADERS))
except Exception:
return None
iframe = soup.select_one("div.fluid-width-video-wrapper iframe[src]")
if iframe is None:
iframe = soup.select_one("iframe[src*='youtube'], iframe[src*='vimeo'], iframe[src]")
if iframe is None:
return None
src = (iframe.get("src") or "").strip()
if not src:
return None
return _absolute_url(src)
# Alias for automatic plugin discovery.
Plugin = DokuStreamsPlugin

View File

@@ -11,7 +11,7 @@ from __future__ import annotations
 import json
 import re
 from dataclasses import dataclass
-from typing import Any, Dict, List, Optional, Set
+from typing import Any, Callable, Dict, List, Optional, Set
 from urllib.parse import urlencode, urljoin, urlsplit

 try:  # pragma: no cover - optional dependency (Kodi dependency)
@@ -43,7 +43,7 @@ SETTING_DUMP_HTML = "dump_html_einschalten"
 SETTING_SHOW_URL_INFO = "show_url_info_einschalten"
 SETTING_LOG_ERRORS = "log_errors_einschalten"

-DEFAULT_BASE_URL = ""
+DEFAULT_BASE_URL = "https://einschalten.in"
 DEFAULT_INDEX_PATH = "/"
 DEFAULT_NEW_TITLES_PATH = "/movies/new"
 DEFAULT_SEARCH_PATH = "/search"
@@ -56,6 +56,16 @@ HEADERS = {
     "Accept-Language": "de-DE,de;q=0.9,en;q=0.8",
     "Connection": "keep-alive",
 }

+ProgressCallback = Optional[Callable[[str, Optional[int]], Any]]
+
+
+def _emit_progress(callback: ProgressCallback, message: str, percent: Optional[int] = None) -> None:
+    if not callable(callback):
+        return
+    try:
+        callback(str(message or ""), None if percent is None else int(percent))
+    except Exception:
+        return
+
+
 @dataclass(frozen=True)
@@ -507,6 +517,7 @@ class EinschaltenPlugin(BasisPlugin):
     """Metadata plugin for an authorized source."""

     name = "Einschalten"
+    version = "1.0.0"

     def __init__(self) -> None:
         self.is_available = REQUESTS_AVAILABLE
@@ -525,6 +536,34 @@ class EinschaltenPlugin(BasisPlugin):
             self._session = requests.Session()
         return self._session

+    def _http_get_text(self, url: str, *, timeout: int = 20) -> tuple[str, str]:
+        _log_url(url, kind="GET")
+        _notify_url(url)
+        sess = self._get_session()
+        response = None
+        try:
+            response = sess.get(url, headers=HEADERS, timeout=timeout)
+            response.raise_for_status()
+            final_url = (response.url or url) if response is not None else url
+            body = (response.text or "") if response is not None else ""
+            _log_url(final_url, kind="OK")
+            _log_response_html(final_url, body)
+            return final_url, body
+        finally:
+            if response is not None:
+                try:
+                    response.close()
+                except Exception:
+                    pass
+
+    def _http_get_json(self, url: str, *, timeout: int = 20) -> tuple[str, Any]:
+        final_url, body = self._http_get_text(url, timeout=timeout)
+        try:
+            payload = json.loads(body or "{}")
+        except Exception:
+            payload = {}
+        return final_url, payload
+
     def _get_base_url(self) -> str:
         base = _get_setting_text(SETTING_BASE_URL, default=DEFAULT_BASE_URL).strip()
         return base.rstrip("/")
@@ -645,15 +684,9 @@ class EinschaltenPlugin(BasisPlugin):
         if not url:
             return ""
         try:
-            _log_url(url, kind="GET")
-            _notify_url(url)
-            sess = self._get_session()
-            resp = sess.get(url, headers=HEADERS, timeout=20)
-            resp.raise_for_status()
-            _log_url(resp.url or url, kind="OK")
-            _log_response_html(resp.url or url, resp.text)
-            self._detail_html_by_id[movie_id] = resp.text or ""
-            return resp.text or ""
+            _, body = self._http_get_text(url, timeout=20)
+            self._detail_html_by_id[movie_id] = body
+            return body
         except Exception as exc:
             _log_error(f"GET {url} failed: {exc}")
             return ""
@@ -666,16 +699,8 @@ class EinschaltenPlugin(BasisPlugin):
         if not url:
             return {}
         try:
-            _log_url(url, kind="GET")
-            _notify_url(url)
-            sess = self._get_session()
-            resp = sess.get(url, headers=HEADERS, timeout=20)
-            resp.raise_for_status()
-            _log_url(resp.url or url, kind="OK")
-            # Some backends may return JSON with a JSON content-type; for debugging we still dump text.
-            _log_response_html(resp.url or url, resp.text)
-            data = resp.json()
-            return dict(data) if isinstance(data, dict) else {}
+            _, data = self._http_get_json(url, timeout=20)
+            return data
         except Exception as exc:
             _log_error(f"GET {url} failed: {exc}")
             return {}
@@ -740,14 +765,8 @@ class EinschaltenPlugin(BasisPlugin):
         if not url:
             return []
         try:
-            _log_url(url, kind="GET")
-            _notify_url(url)
-            sess = self._get_session()
-            resp = sess.get(url, headers=HEADERS, timeout=20)
-            resp.raise_for_status()
-            _log_url(resp.url or url, kind="OK")
-            _log_response_html(resp.url or url, resp.text)
-            payload = _extract_ng_state_payload(resp.text)
+            _, body = self._http_get_text(url, timeout=20)
+            payload = _extract_ng_state_payload(body)
             return _parse_ng_state_movies(payload)
         except Exception:
             return []
@@ -758,14 +777,8 @@ class EinschaltenPlugin(BasisPlugin):
         if not url:
             return []
         try:
-            _log_url(url, kind="GET")
-            _notify_url(url)
-            sess = self._get_session()
-            resp = sess.get(url, headers=HEADERS, timeout=20)
-            resp.raise_for_status()
-            _log_url(resp.url or url, kind="OK")
-            _log_response_html(resp.url or url, resp.text)
-            payload = _extract_ng_state_payload(resp.text)
+            _, body = self._http_get_text(url, timeout=20)
+            payload = _extract_ng_state_payload(body)
             movies = _parse_ng_state_movies(payload)
             _log_debug_line(f"parse_ng_state_movies:count={len(movies)}")
             if movies:
@@ -783,14 +796,8 @@ class EinschaltenPlugin(BasisPlugin):
         if page > 1:
             url = f"{url}?{urlencode({'page': str(page)})}"
         try:
-            _log_url(url, kind="GET")
-            _notify_url(url)
-            sess = self._get_session()
-            resp = sess.get(url, headers=HEADERS, timeout=20)
-            resp.raise_for_status()
-            _log_url(resp.url or url, kind="OK")
-            _log_response_html(resp.url or url, resp.text)
-            payload = _extract_ng_state_payload(resp.text)
+            _, body = self._http_get_text(url, timeout=20)
+            payload = _extract_ng_state_payload(body)
             movies, has_more, current_page = _parse_ng_state_movies_with_pagination(payload)
             _log_debug_line(f"parse_ng_state_movies_page:page={page} count={len(movies)}")
             if has_more is not None:
@@ -843,14 +850,8 @@ class EinschaltenPlugin(BasisPlugin):
         if not url:
             return []
         try:
-            _log_url(url, kind="GET")
-            _notify_url(url)
-            sess = self._get_session()
-            resp = sess.get(url, headers=HEADERS, timeout=20)
-            resp.raise_for_status()
-            _log_url(resp.url or url, kind="OK")
-            _log_response_html(resp.url or url, resp.text)
-            payload = _extract_ng_state_payload(resp.text)
+            _, body = self._http_get_text(url, timeout=20)
+            payload = _extract_ng_state_payload(body)
             results = _parse_ng_state_search_results(payload)
             return _filter_movies_by_title(query, results)
         except Exception:
@@ -866,13 +867,7 @@ class EinschaltenPlugin(BasisPlugin):
         api_url = self._api_genres_url()
         if api_url:
             try:
-                _log_url(api_url, kind="GET")
-                _notify_url(api_url)
-                sess = self._get_session()
-                resp = sess.get(api_url, headers=HEADERS, timeout=20)
-                resp.raise_for_status()
-                _log_url(resp.url or api_url, kind="OK")
-                payload = resp.json()
+                _, payload = self._http_get_json(api_url, timeout=20)
                 if isinstance(payload, list):
                     parsed: Dict[str, int] = {}
                     for item in payload:
@@ -899,14 +894,8 @@ class EinschaltenPlugin(BasisPlugin):
         if not url:
             return
         try:
-            _log_url(url, kind="GET")
-            _notify_url(url)
-            sess = self._get_session()
-            resp = sess.get(url, headers=HEADERS, timeout=20)
-            resp.raise_for_status()
-            _log_url(resp.url or url, kind="OK")
-            _log_response_html(resp.url or url, resp.text)
-            payload = _extract_ng_state_payload(resp.text)
+            _, body = self._http_get_text(url, timeout=20)
+            payload = _extract_ng_state_payload(body)
             parsed = _parse_ng_state_genres(payload)
             if parsed:
                 self._genre_id_by_name.clear()
@@ -914,7 +903,7 @@ class EinschaltenPlugin(BasisPlugin):
         except Exception:
             return

-    async def search_titles(self, query: str) -> List[str]:
+    async def search_titles(self, query: str, progress_callback: ProgressCallback = None) -> List[str]:
         if not REQUESTS_AVAILABLE:
             return []
         query = (query or "").strip()
@@ -923,9 +912,12 @@ class EinschaltenPlugin(BasisPlugin):
         if not self._get_base_url():
             return []

+        _emit_progress(progress_callback, "Einschalten Suche", 15)
         movies = self._fetch_search_movies(query)
         if not movies:
+            _emit_progress(progress_callback, "Fallback: Index filtern", 45)
             movies = _filter_movies_by_title(query, self._load_movies())
+        _emit_progress(progress_callback, f"Treffer verarbeiten ({len(movies)})", 75)
         titles: List[str] = []
         seen: set[str] = set()
         for movie in movies:
@@ -935,6 +927,7 @@ class EinschaltenPlugin(BasisPlugin):
             self._id_by_title[movie.title] = movie.id
             titles.append(movie.title)
         titles.sort(key=lambda value: value.casefold())
+        _emit_progress(progress_callback, f"Fertig: {len(titles)} Treffer", 95)
         return titles

     def genres(self) -> List[str]:
@@ -970,14 +963,8 @@ class EinschaltenPlugin(BasisPlugin):
         if not url:
             return []
         try:
-            _log_url(url, kind="GET")
-            _notify_url(url)
-            sess = self._get_session()
-            resp = sess.get(url, headers=HEADERS, timeout=20)
-            resp.raise_for_status()
-            _log_url(resp.url or url, kind="OK")
-            _log_response_html(resp.url or url, resp.text)
-            payload = _extract_ng_state_payload(resp.text)
+            _, body = self._http_get_text(url, timeout=20)
+            payload = _extract_ng_state_payload(body)
         except Exception:
             return []
         if not isinstance(payload, dict):
@@ -1078,3 +1065,7 @@ class EinschaltenPlugin(BasisPlugin):
             return []
         # Backwards compatible: first page only. UI uses paging via `new_titles_page`.
         return self.new_titles_page(1)
+
+
+# Alias for automatic plugin discovery.
+Plugin = EinschaltenPlugin

View File

@@ -0,0 +1,978 @@
"""Filmpalast Integration (movie-style provider).
Hinweis:
- Der Parser ist bewusst defensiv und arbeitet mit mehreren Fallback-Selektoren,
da Filmpalast-Layouts je Domain variieren koennen.
"""
from __future__ import annotations
from dataclasses import dataclass
import re
from urllib.parse import quote, urlencode, urljoin
from typing import TYPE_CHECKING, Any, Callable, Dict, List, Optional, Tuple
try: # pragma: no cover - optional dependency
import requests
from bs4 import BeautifulSoup # type: ignore[import-not-found]
except ImportError as exc: # pragma: no cover - optional dependency
requests = None
BeautifulSoup = None
REQUESTS_AVAILABLE = False
REQUESTS_IMPORT_ERROR = exc
else:
REQUESTS_AVAILABLE = True
REQUESTS_IMPORT_ERROR = None
from plugin_interface import BasisPlugin
from plugin_helpers import dump_response_html, get_setting_bool, get_setting_string, log_error, log_url, notify_url
from http_session_pool import get_requests_session
if TYPE_CHECKING: # pragma: no cover
from requests import Session as RequestsSession
from bs4 import BeautifulSoup as BeautifulSoupT # type: ignore[import-not-found]
else: # pragma: no cover
RequestsSession = Any
BeautifulSoupT = Any
ADDON_ID = "plugin.video.viewit"
SETTING_BASE_URL = "filmpalast_base_url"
DEFAULT_BASE_URL = "https://filmpalast.to"
DEFAULT_TIMEOUT = 20
DEFAULT_PREFERRED_HOSTERS = ["voe", "vidoza", "streamtape", "doodstream", "mixdrop"]
SERIES_HINT_PREFIX = "series://filmpalast/"
SERIES_VIEW_PATH = "/serien/view"
SEASON_EPISODE_RE = re.compile(r"\bS\s*(\d{1,2})\s*E\s*(\d{1,3})\b", re.IGNORECASE)
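# Marker sketch (hypothetical titles): the pattern tolerates spacing and case,
# so "Dark S01E03", "dark s 1 e 3" and "Dark S1 E 03" all match, with
# group(1) as the season and group(2) as the episode number.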
GLOBAL_SETTING_LOG_URLS = "debug_log_urls"
GLOBAL_SETTING_DUMP_HTML = "debug_dump_html"
GLOBAL_SETTING_SHOW_URL_INFO = "debug_show_url_info"
GLOBAL_SETTING_LOG_ERRORS = "debug_log_errors"
SETTING_LOG_URLS = "log_urls_filmpalast"
SETTING_DUMP_HTML = "dump_html_filmpalast"
SETTING_SHOW_URL_INFO = "show_url_info_filmpalast"
SETTING_LOG_ERRORS = "log_errors_filmpalast"
ProgressCallback = Optional[Callable[[str, Optional[int]], Any]]
def _emit_progress(callback: ProgressCallback, message: str, percent: Optional[int] = None) -> None:
if not callable(callback):
return
try:
callback(str(message or ""), None if percent is None else int(percent))
except Exception:
return
HEADERS = {
"User-Agent": "Mozilla/5.0 (Kodi; ViewIt) AppleWebKit/537.36 (KHTML, like Gecko)",
"Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
"Accept-Language": "de-DE,de;q=0.9,en;q=0.8",
"Connection": "keep-alive",
}
@dataclass(frozen=True)
class SearchHit:
title: str
url: str
@dataclass(frozen=True)
class EpisodeEntry:
season: int
episode: int
suffix: str
url: str
def _get_base_url() -> str:
base = get_setting_string(ADDON_ID, SETTING_BASE_URL, default=DEFAULT_BASE_URL).strip()
if not base:
base = DEFAULT_BASE_URL
return base.rstrip("/")
def _absolute_url(url: str) -> str:
url = (url or "").strip()
if not url:
return ""
if url.startswith("http://") or url.startswith("https://"):
return url
if url.startswith("//"):
return f"https:{url}"
if url.startswith("/"):
return f"{_get_base_url()}{url}"
return f"{_get_base_url()}/{url.lstrip('/')}"
def _normalize_search_text(value: str) -> str:
value = (value or "").casefold()
value = re.sub(r"[^a-z0-9]+", " ", value)
value = re.sub(r"\s+", " ", value).strip()
return value
def _matches_query(query: str, *, title: str) -> bool:
normalized_query = _normalize_search_text(query)
if not normalized_query:
return False
haystack = f" {_normalize_search_text(title)} "
return f" {normalized_query} " in haystack
def _is_probably_content_url(url: str) -> bool:
lower = (url or "").casefold()
if not lower:
return False
block_markers = (
"/genre/",
"/kategorie/",
"/category/",
"/tag/",
"/login",
"/register",
"/kontakt",
"/impressum",
"/datenschutz",
"/dmca",
"/agb",
"javascript:",
"#",
)
if any(marker in lower for marker in block_markers):
return False
allow_markers = ("/stream/", "/film/", "/movie/", "/serien/", "/serie/", "/title/")
return any(marker in lower for marker in allow_markers)
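# Filter sketch (hypothetical URLs): block navigation/legal links first, then
# require a known content path.
#   _is_probably_content_url("https://x.to/stream/dark")  -> True
#   _is_probably_content_url("https://x.to/genre/action") -> False (blocked)
#   _is_probably_content_url("https://x.to/news/item")    -> False (no marker)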
def _log_url_event(url: str, *, kind: str = "VISIT") -> None:
log_url(
ADDON_ID,
enabled_setting_id=GLOBAL_SETTING_LOG_URLS,
plugin_setting_id=SETTING_LOG_URLS,
log_filename="filmpalast_urls.log",
url=url,
kind=kind,
)
def _log_visit(url: str) -> None:
_log_url_event(url, kind="VISIT")
notify_url(
ADDON_ID,
heading="Filmpalast",
url=url,
enabled_setting_id=GLOBAL_SETTING_SHOW_URL_INFO,
plugin_setting_id=SETTING_SHOW_URL_INFO,
)
def _log_response_html(url: str, body: str) -> None:
dump_response_html(
ADDON_ID,
enabled_setting_id=GLOBAL_SETTING_DUMP_HTML,
plugin_setting_id=SETTING_DUMP_HTML,
url=url,
body=body,
filename_prefix="filmpalast_response",
)
def _log_error_message(message: str) -> None:
log_error(
ADDON_ID,
enabled_setting_id=GLOBAL_SETTING_LOG_ERRORS,
plugin_setting_id=SETTING_LOG_ERRORS,
log_filename="filmpalast_errors.log",
message=message,
)
def _is_series_hint_url(value: str) -> bool:
return (value or "").startswith(SERIES_HINT_PREFIX)
def _series_hint_value(title: str) -> str:
safe_title = quote((title or "").strip(), safe="")
return f"{SERIES_HINT_PREFIX}{safe_title}" if safe_title else SERIES_HINT_PREFIX
def _extract_number(value: str) -> Optional[int]:
match = re.search(r"(\d+)", value or "")
if not match:
return None
try:
return int(match.group(1))
except Exception:
return None
def _strip_series_alias(title: str) -> str:
return re.sub(r"\s*\(serie\)\s*$", "", title or "", flags=re.IGNORECASE).strip()
def _get_soup(url: str, *, session: Optional[RequestsSession] = None) -> BeautifulSoupT:
if requests is None or BeautifulSoup is None:
raise RuntimeError("requests/bs4 sind nicht verfuegbar.")
_log_visit(url)
sess = session or get_requests_session("filmpalast", headers=HEADERS)
response = None
try:
response = sess.get(url, headers=HEADERS, timeout=DEFAULT_TIMEOUT)
response.raise_for_status()
except Exception as exc:
_log_error_message(f"GET {url} failed: {exc}")
raise
try:
final_url = (response.url or url) if response is not None else url
body = (response.text or "") if response is not None else ""
if final_url != url:
_log_url_event(final_url, kind="REDIRECT")
_log_response_html(url, body)
return BeautifulSoup(body, "html.parser")
finally:
if response is not None:
try:
response.close()
except Exception:
pass
class FilmpalastPlugin(BasisPlugin):
name = "Filmpalast"
version = "1.0.0"
def __init__(self) -> None:
self._title_to_url: Dict[str, str] = {}
self._series_entries: Dict[str, Dict[int, Dict[int, EpisodeEntry]]] = {}
self._hoster_cache: Dict[str, Dict[str, str]] = {}
self._genre_to_url: Dict[str, str] = {}
self._genre_page_count_cache: Dict[str, int] = {}
self._alpha_to_url: Dict[str, str] = {}
self._alpha_page_count_cache: Dict[str, int] = {}
self._series_page_count_cache: Dict[int, int] = {}
self._requests_available = REQUESTS_AVAILABLE
self._default_preferred_hosters: List[str] = list(DEFAULT_PREFERRED_HOSTERS)
self._preferred_hosters: List[str] = list(self._default_preferred_hosters)
self.is_available = True
self.unavailable_reason: Optional[str] = None
if not self._requests_available: # pragma: no cover - optional dependency
self.is_available = False
self.unavailable_reason = (
"requests/bs4 fehlen. Installiere 'requests' und 'beautifulsoup4'."
)
if REQUESTS_IMPORT_ERROR:
print(f"FilmpalastPlugin Importfehler: {REQUESTS_IMPORT_ERROR}")
def _lookup_title_url(self, title: str) -> str:
title = (title or "").strip()
if not title:
return ""
direct = self._title_to_url.get(title)
if direct:
return direct
wanted = title.casefold()
for key, value in self._title_to_url.items():
if key.casefold() == wanted and value:
return value
return ""
def _series_key_for_title(self, title: str) -> str:
title = (title or "").strip()
if not title:
return ""
if title in self._series_entries:
return title
wanted = title.casefold()
for key in self._series_entries.keys():
if key.casefold() == wanted:
return key
return ""
def _has_series_entries(self, title: str) -> bool:
return bool(self._series_key_for_title(title))
def _episode_entry_from_hit(self, hit: SearchHit) -> Optional[Tuple[str, EpisodeEntry]]:
title = (hit.title or "").strip()
if not title:
return None
marker = SEASON_EPISODE_RE.search(title)
if not marker:
return None
try:
season_number = int(marker.group(1))
episode_number = int(marker.group(2))
except Exception:
return None
series_title = re.sub(r"\s+", " ", title[: marker.start()] or "").strip(" -|:;,_")
if not series_title:
return None
suffix = re.sub(r"\s+", " ", title[marker.end() :] or "").strip(" -|:;,_")
entry = EpisodeEntry(season=season_number, episode=episode_number, suffix=suffix, url=hit.url)
return (series_title, entry)
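# Splitting sketch (hypothetical listing title): "Dark S02E05 - Das Geheimnis"
# yields series_title "Dark", season 2, episode 5 and suffix "Das Geheimnis";
# titles without an SxxEyy marker return None and are treated as movies.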
def _add_series_entry(self, series_title: str, entry: EpisodeEntry) -> None:
if not series_title or not entry.url:
return
seasons = self._series_entries.setdefault(series_title, {})
episodes = seasons.setdefault(entry.season, {})
if entry.episode not in episodes:
episodes[entry.episode] = entry
def _ensure_series_entries_for_title(self, title: str) -> str:
series_key = self._series_key_for_title(title)
if series_key:
return series_key
original_title = (title or "").strip()
lookup_title = _strip_series_alias(original_title)
if not lookup_title:
return ""
if not self._requests_available:
return ""
wanted = _normalize_search_text(lookup_title)
hits = self._search_hits(lookup_title)
for hit in hits:
parsed = self._episode_entry_from_hit(hit)
if not parsed:
continue
series_title, entry = parsed
if wanted and _normalize_search_text(series_title) != wanted:
continue
self._add_series_entry(series_title, entry)
self._title_to_url.setdefault(series_title, _series_hint_value(series_title))
resolved = self._series_key_for_title(original_title) or self._series_key_for_title(lookup_title)
if resolved and original_title and original_title != resolved:
self._series_entries[original_title] = self._series_entries[resolved]
self._title_to_url.setdefault(original_title, _series_hint_value(resolved))
return original_title
return resolved
def _detail_url_for_selection(self, title: str, season: str, episode: str) -> str:
series_key = self._series_key_for_title(title) or self._ensure_series_entries_for_title(title)
if series_key:
season_number = _extract_number(season)
episode_number = _extract_number(episode)
if season_number is None or episode_number is None:
return ""
entry = self._series_entries.get(series_key, {}).get(season_number, {}).get(episode_number)
return entry.url if entry else ""
return self._ensure_title_url(title)
def _search_hits(self, query: str) -> List[SearchHit]:
query = (query or "").strip()
if not query:
return []
if not self._requests_available or requests is None:
return []
session = get_requests_session("filmpalast", headers=HEADERS)
search_requests = [(_absolute_url(f"/search/title/{quote(query)}"), None)]
hits: List[SearchHit] = []
seen_titles: set[str] = set()
seen_urls: set[str] = set()
for base_url, params in search_requests:
response = None
try:
request_url = base_url if not params else f"{base_url}?{urlencode(params)}"
_log_url_event(request_url, kind="GET")
_log_visit(request_url)
response = session.get(base_url, params=params, headers=HEADERS, timeout=DEFAULT_TIMEOUT)
response.raise_for_status()
if response.url and response.url != request_url:
_log_url_event(response.url, kind="REDIRECT")
_log_response_html(request_url, response.text)
soup = BeautifulSoup(response.text, "html.parser")
except Exception as exc:
_log_error_message(f"search request failed ({base_url}): {exc}")
continue
finally:
if response is not None:
try:
response.close()
except Exception:
pass
anchors = soup.select("article.liste h2 a[href], article.liste h3 a[href]")
if not anchors:
anchors = soup.select("a[href*='/stream/'][title], a[href*='/stream/']")
for anchor in anchors:
href = (anchor.get("href") or "").strip()
if not href:
continue
url = _absolute_url(href).split("#", 1)[0].split("?", 1)[0].rstrip("/")
if not _is_probably_content_url(url):
continue
title = (anchor.get("title") or anchor.get_text(" ", strip=True)).strip()
title = (title or "").strip()
if not title:
continue
if title.casefold() in {"details/play", "play", "details"}:
continue
if not _matches_query(query, title=title):
continue
title_key = title.casefold()
url_key = url.casefold()
if title_key in seen_titles or url_key in seen_urls:
continue
seen_titles.add(title_key)
seen_urls.add(url_key)
_log_url_event(url, kind="PARSE")
hits.append(SearchHit(title=title, url=url))
if hits:
break
return hits
def _parse_listing_hits(self, soup: BeautifulSoupT, *, query: str = "") -> List[SearchHit]:
hits: List[SearchHit] = []
if not soup:
return hits
seen_titles: set[str] = set()
seen_urls: set[str] = set()
anchors = soup.select("article.liste h2 a[href], article.liste h3 a[href]")
if not anchors:
anchors = soup.select("a[href*='/stream/'][title], a[href*='/stream/']")
for anchor in anchors:
href = (anchor.get("href") or "").strip()
if not href:
continue
url = _absolute_url(href).split("#", 1)[0].split("?", 1)[0].rstrip("/")
if not _is_probably_content_url(url):
continue
title = (anchor.get("title") or anchor.get_text(" ", strip=True)).strip()
if not title:
continue
if title.casefold() in {"details/play", "play", "details"}:
continue
if query and not _matches_query(query, title=title):
continue
title_key = title.casefold()
url_key = url.casefold()
if title_key in seen_titles or url_key in seen_urls:
continue
seen_titles.add(title_key)
seen_urls.add(url_key)
_log_url_event(url, kind="PARSE")
hits.append(SearchHit(title=title, url=url))
return hits
def _apply_hits_to_title_index(self, hits: List[SearchHit]) -> List[str]:
self._title_to_url = {}
self._series_entries = {}
self._hoster_cache.clear()
movie_titles: List[str] = []
series_titles_seen: set[str] = set()
for hit in hits:
parsed = self._episode_entry_from_hit(hit)
if parsed:
series_title, entry = parsed
self._add_series_entry(series_title, entry)
if series_title.casefold() not in series_titles_seen:
self._title_to_url[series_title] = _series_hint_value(series_title)
series_titles_seen.add(series_title.casefold())
continue
title = (hit.title or "").strip()
if not title:
continue
movie_titles.append(title)
self._title_to_url[title] = hit.url
titles: List[str] = list(movie_titles)
movie_keys = {entry.casefold() for entry in movie_titles}
for series_title in sorted(self._series_entries.keys(), key=lambda value: value.casefold()):
if series_title.casefold() in movie_keys:
alias = f"{series_title} (Serie)"
self._title_to_url[alias] = self._title_to_url.get(series_title, _series_hint_value(series_title))
self._series_entries[alias] = self._series_entries[series_title]
titles.append(alias)
else:
titles.append(series_title)
titles.sort(key=lambda value: value.casefold())
return titles
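# Collision handling: when a movie and a series share a name, the series keeps
# its entries under the alias "<Titel> (Serie)" so both remain selectable; the
# alias is stripped again by _strip_series_alias() before lookups.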
async def search_titles(self, query: str, progress_callback: ProgressCallback = None) -> List[str]:
_emit_progress(progress_callback, "Filmpalast Suche", 15)
hits = self._search_hits(query)
_emit_progress(progress_callback, f"Treffer verarbeiten ({len(hits)})", 70)
titles = self._apply_hits_to_title_index(hits)
_emit_progress(progress_callback, f"Fertig: {len(titles)} Treffer", 95)
return titles
def _parse_genres(self, soup: BeautifulSoupT) -> Dict[str, str]:
genres: Dict[str, str] = {}
if not soup:
return genres
for anchor in soup.select("section#genre a[href], #genre a[href], aside #genre a[href]"):
name = (anchor.get_text(" ", strip=True) or "").strip()
href = (anchor.get("href") or "").strip()
if not name or not href:
continue
if "/search/genre/" not in href:
continue
genres[name] = _absolute_url(href)
return genres
def _extract_last_page(self, soup: BeautifulSoupT) -> int:
max_page = 1
if not soup:
return max_page
for anchor in soup.select("#paging a[href], .paging a[href], a.pageing[href]"):
text = (anchor.get_text(" ", strip=True) or "").strip()
for candidate in (text, (anchor.get("href") or "").strip()):
for value in re.findall(r"(\d+)", candidate):
try:
max_page = max(max_page, int(value))
except Exception:
continue
return max_page
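# Pagination sketch: the highest integer found in the paging anchors' text or
# href wins, so hypothetical markup with <a href="/page/7">7</a> and
# <a href="/page/12">12</a> yields 12; later pages are then requested via
# ".../page/<n>" URLs.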
def capabilities(self) -> set[str]:
return {"genres", "alpha", "series_catalog"}
def _parse_alpha_links(self, soup: BeautifulSoupT) -> Dict[str, str]:
alpha: Dict[str, str] = {}
if not soup:
return alpha
for anchor in soup.select("section#movietitle a[href], #movietitle a[href], aside #movietitle a[href]"):
name = (anchor.get_text(" ", strip=True) or "").strip()
href = (anchor.get("href") or "").strip()
if not name or not href:
continue
if "/search/alpha/" not in href:
continue
if name in alpha:
continue
alpha[name] = _absolute_url(href)
return alpha
def alpha_index(self) -> List[str]:
if not self._requests_available:
return []
if self._alpha_to_url:
return list(self._alpha_to_url.keys())
try:
soup = _get_soup(_absolute_url("/"), session=get_requests_session("filmpalast", headers=HEADERS))
except Exception:
return []
parsed = self._parse_alpha_links(soup)
if parsed:
self._alpha_to_url = dict(parsed)
return list(self._alpha_to_url.keys())
def alpha_page_count(self, letter: str) -> int:
letter = (letter or "").strip()
if not letter:
return 1
if letter in self._alpha_page_count_cache:
return max(1, int(self._alpha_page_count_cache.get(letter, 1)))
if not self._alpha_to_url:
self.alpha_index()
base_url = self._alpha_to_url.get(letter, "")
if not base_url:
return 1
try:
soup = _get_soup(base_url, session=get_requests_session("filmpalast", headers=HEADERS))
except Exception:
return 1
pages = self._extract_last_page(soup)
self._alpha_page_count_cache[letter] = max(1, pages)
return self._alpha_page_count_cache[letter]
def titles_for_alpha_page(self, letter: str, page: int) -> List[str]:
letter = (letter or "").strip()
if not letter or not self._requests_available:
return []
if not self._alpha_to_url:
self.alpha_index()
base_url = self._alpha_to_url.get(letter, "")
if not base_url:
return []
page = max(1, int(page or 1))
url = base_url if page == 1 else urljoin(base_url.rstrip("/") + "/", f"page/{page}")
try:
soup = _get_soup(url, session=get_requests_session("filmpalast", headers=HEADERS))
except Exception:
return []
hits = self._parse_listing_hits(soup)
return self._apply_hits_to_title_index(hits)
def titles_for_alpha(self, letter: str) -> List[str]:
titles = self.titles_for_alpha_page(letter, 1)
titles.sort(key=lambda value: value.casefold())
return titles
def _series_view_url(self) -> str:
return _absolute_url(SERIES_VIEW_PATH)
def series_catalog_page_count(self, page: int = 1) -> int:
if not self._requests_available:
return 1
cache_key = int(page or 1)
if cache_key in self._series_page_count_cache:
return max(1, int(self._series_page_count_cache.get(cache_key, 1)))
base_url = self._series_view_url()
if not base_url:
return 1
try:
soup = _get_soup(base_url, session=get_requests_session("filmpalast", headers=HEADERS))
except Exception:
return 1
pages = self._extract_last_page(soup)
self._series_page_count_cache[cache_key] = max(1, pages)
return self._series_page_count_cache[cache_key]
def series_catalog_page(self, page: int) -> List[str]:
if not self._requests_available:
return []
base_url = self._series_view_url()
if not base_url:
return []
page = max(1, int(page or 1))
url = base_url if page == 1 else urljoin(base_url.rstrip("/") + "/", f"page/{page}")
try:
soup = _get_soup(url, session=get_requests_session("filmpalast", headers=HEADERS))
except Exception:
return []
hits = self._parse_listing_hits(soup)
return self._apply_hits_to_title_index(hits)
def series_catalog_has_more(self, page: int) -> bool:
total = self.series_catalog_page_count(page)
return page < total
def genres(self) -> List[str]:
if not self._requests_available:
return []
if self._genre_to_url:
return sorted(self._genre_to_url.keys(), key=lambda value: value.casefold())
try:
soup = _get_soup(_absolute_url("/"), session=get_requests_session("filmpalast", headers=HEADERS))
except Exception:
return []
parsed = self._parse_genres(soup)
if parsed:
self._genre_to_url = dict(parsed)
return sorted(self._genre_to_url.keys(), key=lambda value: value.casefold())
def genre_page_count(self, genre: str) -> int:
genre = (genre or "").strip()
if not genre:
return 1
if genre in self._genre_page_count_cache:
return max(1, int(self._genre_page_count_cache.get(genre, 1)))
if not self._genre_to_url:
self.genres()
base_url = self._genre_to_url.get(genre, "")
if not base_url:
return 1
try:
soup = _get_soup(base_url, session=get_requests_session("filmpalast", headers=HEADERS))
except Exception:
return 1
pages = self._extract_last_page(soup)
self._genre_page_count_cache[genre] = max(1, pages)
return self._genre_page_count_cache[genre]
def titles_for_genre_page(self, genre: str, page: int) -> List[str]:
genre = (genre or "").strip()
if not genre or not self._requests_available:
return []
if not self._genre_to_url:
self.genres()
base_url = self._genre_to_url.get(genre, "")
if not base_url:
return []
page = max(1, int(page or 1))
url = base_url if page == 1 else urljoin(base_url.rstrip("/") + "/", f"page/{page}")
try:
soup = _get_soup(url, session=get_requests_session("filmpalast", headers=HEADERS))
except Exception:
return []
hits = self._parse_listing_hits(soup)
return self._apply_hits_to_title_index(hits)
def titles_for_genre(self, genre: str) -> List[str]:
titles = self.titles_for_genre_page(genre, 1)
titles.sort(key=lambda value: value.casefold())
return titles
def _ensure_title_url(self, title: str) -> str:
title = (title or "").strip()
if not title:
return ""
direct = self._lookup_title_url(title)
if direct and _is_series_hint_url(direct):
return ""
if direct:
self._title_to_url[title] = direct
return direct
if self._has_series_entries(title) or self._ensure_series_entries_for_title(title):
self._title_to_url[title] = _series_hint_value(title)
return ""
wanted = title.casefold()
hits = self._search_hits(title)
for hit in hits:
if self._episode_entry_from_hit(hit):
continue
if hit.title.casefold() == wanted and hit.url:
self._title_to_url[title] = hit.url
return hit.url
return ""
def remember_series_url(self, title: str, series_url: str) -> None:
title = (title or "").strip()
series_url = (series_url or "").strip()
if not title or not series_url:
return
self._title_to_url[title] = series_url
self._hoster_cache.clear()
def series_url_for_title(self, title: str) -> str:
title = (title or "").strip()
if not title:
return ""
direct = self._lookup_title_url(title)
if direct:
return direct
series_key = self._series_key_for_title(title)
if series_key:
return _series_hint_value(series_key)
return ""
def is_movie(self, title: str) -> bool:
title = (title or "").strip()
if not title:
return False
direct = self._lookup_title_url(title)
if direct:
return not _is_series_hint_url(direct)
if SEASON_EPISODE_RE.search(title):
return False
if self._has_series_entries(title):
return False
if self._ensure_series_entries_for_title(title):
return False
return True
@staticmethod
def _normalize_hoster_name(name: str) -> str:
name = (name or "").strip()
if not name:
return ""
name = re.sub(r"\s+", " ", name)
return name
def _extract_hoster_links(self, soup: BeautifulSoupT) -> Dict[str, str]:
hosters: Dict[str, str] = {}
if not soup:
return hosters
# Primary layout: each hoster sits in its own UL with hostName + play link.
for block in soup.select("ul.currentStreamLinks"):
host_name_node = block.select_one("li.hostBg .hostName")
host_name = self._normalize_hoster_name(host_name_node.get_text(" ", strip=True) if host_name_node else "")
play_anchor = block.select_one("li.streamPlayBtn a[href], a.button.iconPlay[href]")
href = (play_anchor.get("href") if play_anchor else "") or ""
play_url = _absolute_url(href).strip()
if not play_url:
continue
if not host_name:
host_name = self._normalize_hoster_name(play_anchor.get_text(" ", strip=True) if play_anchor else "")
if not host_name:
host_name = "Unbekannt"
if host_name not in hosters:
hosters[host_name] = play_url
# Fallback: direct play buttons in the stream area.
if not hosters:
for anchor in soup.select("#grap-stream-list a.button.iconPlay[href], .streamLinksWrapper a.button.iconPlay[href]"):
href = (anchor.get("href") or "").strip()
play_url = _absolute_url(href).strip()
if not play_url:
continue
text_name = self._normalize_hoster_name(anchor.get_text(" ", strip=True))
host_name = text_name if text_name and text_name.casefold() not in {"play", "details play"} else "Unbekannt"
if host_name in hosters:
host_name = f"{host_name} #{len(hosters) + 1}"
hosters[host_name] = play_url
return hosters
def _hosters_for_detail_url(self, detail_url: str) -> Dict[str, str]:
detail_url = (detail_url or "").strip()
if not detail_url:
return {}
cached = self._hoster_cache.get(detail_url)
if cached is not None:
return dict(cached)
if not self._requests_available:
return {}
try:
soup = _get_soup(detail_url, session=get_requests_session("filmpalast", headers=HEADERS))
except Exception:
return {}
hosters = self._extract_hoster_links(soup)
for url in hosters.values():
_log_url_event(url, kind="PARSE")
self._hoster_cache[detail_url] = dict(hosters)
return dict(hosters)
def seasons_for(self, title: str) -> List[str]:
title = (title or "").strip()
if not title:
return []
series_key = self._series_key_for_title(title) or self._ensure_series_entries_for_title(title)
if series_key:
seasons = sorted(self._series_entries.get(series_key, {}).keys())
return [f"Staffel {number}" for number in seasons]
detail_url = self._ensure_title_url(title)
return ["Film"] if detail_url else []
def episodes_for(self, title: str, season: str) -> List[str]:
title = (title or "").strip()
series_key = self._series_key_for_title(title) or self._ensure_series_entries_for_title(title)
if series_key:
season_number = _extract_number(season)
if season_number is None:
return []
episodes = self._series_entries.get(series_key, {}).get(season_number, {})
labels: List[str] = []
for episode_number in sorted(episodes.keys()):
entry = episodes[episode_number]
label = f"Episode {episode_number}"
if entry.suffix:
label = f"{label} - {entry.suffix}"
labels.append(label)
return labels
return ["Stream"] if self._ensure_title_url(title) else []
def available_hosters_for(self, title: str, season: str, episode: str) -> List[str]:
detail_url = self._detail_url_for_selection(title, season, episode)
return self.available_hosters_for_url(detail_url)
def stream_link_for(self, title: str, season: str, episode: str) -> Optional[str]:
detail_url = self._detail_url_for_selection(title, season, episode)
return self.stream_link_for_url(detail_url)
def episode_url_for(self, title: str, season: str, episode: str) -> str:
detail_url = self._detail_url_for_selection(title, season, episode)
return (detail_url or "").strip()
def available_hosters_for_url(self, episode_url: str) -> List[str]:
detail_url = (episode_url or "").strip()
hosters = self._hosters_for_detail_url(detail_url)
return list(hosters.keys())
def stream_link_for_url(self, episode_url: str) -> Optional[str]:
detail_url = (episode_url or "").strip()
if not detail_url:
return None
hosters = self._hosters_for_detail_url(detail_url)
if hosters:
for preferred in self._preferred_hosters:
preferred_key = (preferred or "").strip().casefold()
if not preferred_key:
continue
for host_name, host_url in hosters.items():
if preferred_key in host_name.casefold() or preferred_key in host_url.casefold():
_log_url_event(host_url, kind="FOUND")
return host_url
first = next(iter(hosters.values()))
_log_url_event(first, kind="FOUND")
return first
if not self._requests_available:
return detail_url
try:
soup = _get_soup(detail_url, session=get_requests_session("filmpalast", headers=HEADERS))
except Exception:
return detail_url
candidates: List[str] = []
for iframe in soup.select("iframe[src]"):
src = (iframe.get("src") or "").strip()
if src:
candidates.append(_absolute_url(src))
for anchor in soup.select("a[href]"):
href = (anchor.get("href") or "").strip()
if not href:
continue
lower = href.casefold()
if "watch" in lower or "stream" in lower or "player" in lower:
candidates.append(_absolute_url(href))
deduped: List[str] = []
seen: set[str] = set()
for candidate in candidates:
key = candidate.casefold()
if key in seen:
continue
seen.add(key)
deduped.append(candidate)
if deduped:
_log_url_event(deduped[0], kind="FOUND")
return deduped[0]
return detail_url
def set_preferred_hosters(self, hosters: List[str]) -> None:
normalized = [str(hoster).strip().lower() for hoster in hosters if str(hoster).strip()]
if normalized:
self._preferred_hosters = normalized
def reset_preferred_hosters(self) -> None:
self._preferred_hosters = list(self._default_preferred_hosters)
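# Preference matching in stream_link_for_url() is a case-insensitive substring
# test against hoster name and URL, in list order: with the defaults, a "VOE"
# entry beats "Streamtape"; when nothing matches, the first parsed hoster is
# returned as fallback.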
def resolve_stream_link(self, link: str) -> Optional[str]:
if not link:
return None
try:
from resolveurl_backend import resolve as resolve_with_resolveurl
except Exception:
resolve_with_resolveurl = None
# 1) Always hand the original hoster link to ResolveURL first.
if callable(resolve_with_resolveurl):
resolved_by_resolveurl = resolve_with_resolveurl(link)
if resolved_by_resolveurl:
_log_url_event("ResolveURL", kind="HOSTER_RESOLVER")
_log_url_event(resolved_by_resolveurl, kind="MEDIA")
return resolved_by_resolveurl
redirected = link
if self._requests_available:
response = None
try:
session = get_requests_session("filmpalast", headers=HEADERS)
response = session.get(link, headers=HEADERS, timeout=DEFAULT_TIMEOUT, allow_redirects=True)
response.raise_for_status()
redirected = (response.url or link).strip() or link
except Exception:
redirected = link
finally:
if response is not None:
try:
response.close()
except Exception:
pass
# 2) Then optionally resolve the redirect URL once more.
if callable(resolve_with_resolveurl) and redirected and redirected != link:
resolved_by_resolveurl = resolve_with_resolveurl(redirected)
if resolved_by_resolveurl:
_log_url_event("ResolveURL", kind="HOSTER_RESOLVER")
_log_url_event(resolved_by_resolveurl, kind="MEDIA")
return resolved_by_resolveurl
# 3) Fallback stays as before: return the direct URL.
if redirected:
_log_url_event(redirected, kind="FINAL")
return redirected
return None
# Alias for automatic plugin discovery.
Plugin = FilmpalastPlugin

File diff suppressed because it is too large

View File

@@ -19,7 +19,7 @@ import hashlib
 import os
 import re
 import json
-from typing import TYPE_CHECKING, Any, Dict, List, Optional, TypeAlias
+from typing import TYPE_CHECKING, Any, Callable, Dict, List, Optional
 from urllib.parse import urlencode, urljoin

 try:  # pragma: no cover - optional dependency
@@ -51,13 +51,13 @@ if TYPE_CHECKING:  # pragma: no cover
     from requests import Session as RequestsSession
     from bs4 import BeautifulSoup as BeautifulSoupT  # type: ignore[import-not-found]
 else:  # pragma: no cover
-    RequestsSession: TypeAlias = Any
-    BeautifulSoupT: TypeAlias = Any
+    RequestsSession = Any
+    BeautifulSoupT = Any

 ADDON_ID = "plugin.video.viewit"
 SETTING_BASE_URL = "topstream_base_url"
-DEFAULT_BASE_URL = "https://www.meineseite"
+DEFAULT_BASE_URL = "https://topstreamfilm.live"
 GLOBAL_SETTING_LOG_URLS = "debug_log_urls"
 GLOBAL_SETTING_DUMP_HTML = "debug_dump_html"
 GLOBAL_SETTING_SHOW_URL_INFO = "debug_show_url_info"
@@ -78,6 +78,16 @@ HEADERS = {
     "Accept-Language": "de-DE,de;q=0.9,en;q=0.8",
     "Connection": "keep-alive",
 }

+ProgressCallback = Optional[Callable[[str, Optional[int]], Any]]
+
+
+def _emit_progress(callback: ProgressCallback, message: str, percent: Optional[int] = None) -> None:
+    if not callable(callback):
+        return
+    try:
+        callback(str(message or ""), None if percent is None else int(percent))
+    except Exception:
+        return
+
+
 @dataclass(frozen=True)
@@ -106,10 +116,8 @@ def _matches_query(query: str, *, title: str, description: str) -> bool:
     normalized_query = _normalize_search_text(query)
     if not normalized_query:
         return False
-    haystack = _normalize_search_text(title)
-    if not haystack:
-        return False
-    return normalized_query in haystack
+    haystack = f" {_normalize_search_text(title)} "
+    return f" {normalized_query} " in haystack


 def _strip_der_film_suffix(title: str) -> str:
@@ -125,6 +133,7 @@ class TopstreamfilmPlugin(BasisPlugin):
    """Integration for an HTML-based search site."""

     name = "Topstreamfilm"
+    version = "1.0.0"

     def __init__(self) -> None:
         self._session: RequestsSession | None = None
@@ -585,15 +594,25 @@ class TopstreamfilmPlugin(BasisPlugin):
         session = self._get_session()
         self._log_url(url, kind="VISIT")
         self._notify_url(url)
+        response = None
         try:
             response = session.get(url, timeout=DEFAULT_TIMEOUT)
             response.raise_for_status()
         except Exception as exc:
             self._log_error(f"GET {url} failed: {exc}")
             raise
-        self._log_url(response.url, kind="OK")
-        self._log_response_html(response.url, response.text)
-        return BeautifulSoup(response.text, "html.parser")
+        try:
+            final_url = (response.url or url) if response is not None else url
+            body = (response.text or "") if response is not None else ""
+            self._log_url(final_url, kind="OK")
+            self._log_response_html(final_url, body)
+            return BeautifulSoup(body, "html.parser")
+        finally:
+            if response is not None:
+                try:
+                    response.close()
+                except Exception:
+                    pass

     def _get_detail_soup(self, title: str) -> Optional[BeautifulSoupT]:
         title = (title or "").strip()
@@ -815,7 +834,7 @@ class TopstreamfilmPlugin(BasisPlugin):
         # Otherwise: parse the series via the streams accordion (if present).
         self._parse_stream_accordion(soup, title=title)

-    async def search_titles(self, query: str) -> List[str]:
+    async def search_titles(self, query: str, progress_callback: ProgressCallback = None) -> List[str]:
         """Searches for titles via an HTML search.

         Expected HTML (snippet):
@@ -828,6 +847,7 @@ class TopstreamfilmPlugin(BasisPlugin):
         query = (query or "").strip()
         if not query:
             return []
+        _emit_progress(progress_callback, "Topstreamfilm Suche", 15)

         session = self._get_session()
         url = self._get_base_url() + "/"
@@ -835,6 +855,7 @@ class TopstreamfilmPlugin(BasisPlugin):
         request_url = f"{url}?{urlencode(params)}"
         self._log_url(request_url, kind="GET")
         self._notify_url(request_url)
+        response = None
         try:
             response = session.get(
                 url,
@@ -845,15 +866,28 @@ class TopstreamfilmPlugin(BasisPlugin):
         except Exception as exc:
             self._log_error(f"GET {request_url} failed: {exc}")
             raise
-        self._log_url(response.url, kind="OK")
-        self._log_response_html(response.url, response.text)
-        if BeautifulSoup is None:
-            return []
-        soup = BeautifulSoup(response.text, "html.parser")
+        try:
+            final_url = (response.url or request_url) if response is not None else request_url
+            body = (response.text or "") if response is not None else ""
+            self._log_url(final_url, kind="OK")
+            self._log_response_html(final_url, body)
+            if BeautifulSoup is None:
+                return []
+            soup = BeautifulSoup(body, "html.parser")
+        finally:
+            if response is not None:
+                try:
+                    response.close()
+                except Exception:
+                    pass

         hits: List[SearchHit] = []
-        for item in soup.select("li.TPostMv"):
+        items = soup.select("li.TPostMv")
+        total_items = max(1, len(items))
+        for idx, item in enumerate(items, start=1):
+            if idx == 1 or idx % 20 == 0:
+                _emit_progress(progress_callback, f"Treffer pruefen {idx}/{total_items}", 55)
             anchor = item.select_one("a[href]")
             if not anchor:
                 continue
@@ -886,6 +920,7 @@ class TopstreamfilmPlugin(BasisPlugin):
             self._title_to_url[hit.title] = hit.url
             titles.append(hit.title)
         self._save_title_url_cache()
+        _emit_progress(progress_callback, f"Fertig: {len(titles)} Treffer", 95)
         return titles

     def genres(self) -> List[str]:

View File

@@ -1,57 +1,85 @@
 <?xml version="1.0" encoding="UTF-8"?>
 <settings>
-    <category label="Logging">
+    <category label="Debug und Logs">
-        <setting id="debug_log_urls" type="bool" label="URL-Logging aktivieren (global)" default="false" />
+        <setting id="debug_log_urls" type="bool" label="URLs mitschreiben (global)" default="false" />
-        <setting id="debug_dump_html" type="bool" label="HTML-Dumps aktivieren (global)" default="false" />
+        <setting id="debug_dump_html" type="bool" label="HTML speichern (global)" default="false" />
-        <setting id="debug_show_url_info" type="bool" label="URL-Info anzeigen (global)" default="false" />
+        <setting id="debug_show_url_info" type="bool" label="Aktuelle URL anzeigen (global)" default="false" />
-        <setting id="debug_log_errors" type="bool" label="Fehler-Logging aktivieren (global)" default="false" />
+        <setting id="debug_log_errors" type="bool" label="Fehler mitschreiben (global)" default="false" />
-        <setting id="log_max_mb" type="number" label="URL-Log: max. Datei-Größe (MB)" default="5" />
+        <setting id="log_max_mb" type="number" label="URL-Log: maximale Dateigroesse (MB)" default="5" />
-        <setting id="log_max_files" type="number" label="URL-Log: max. Rotationen" default="3" />
+        <setting id="log_max_files" type="number" label="URL-Log: Anzahl alter Dateien" default="3" />
-        <setting id="dump_max_files" type="number" label="HTML-Dumps: max. Dateien pro Plugin" default="200" />
+        <setting id="dump_max_files" type="number" label="HTML: maximale Dateien pro Plugin" default="200" />
-        <setting id="log_urls_serienstream" type="bool" label="Serienstream: URL-Logging" default="false" />
+        <setting id="log_urls_serienstream" type="bool" label="Serienstream: URLs mitschreiben" default="false" />
-        <setting id="dump_html_serienstream" type="bool" label="Serienstream: HTML-Dumps" default="false" />
+        <setting id="dump_html_serienstream" type="bool" label="Serienstream: HTML speichern" default="false" />
-        <setting id="show_url_info_serienstream" type="bool" label="Serienstream: URL-Info anzeigen" default="false" />
+        <setting id="show_url_info_serienstream" type="bool" label="Serienstream: Aktuelle URL anzeigen" default="false" />
-        <setting id="log_errors_serienstream" type="bool" label="Serienstream: Fehler loggen" default="false" />
+        <setting id="log_errors_serienstream" type="bool" label="Serienstream: Fehler mitschreiben" default="false" />
-        <setting id="log_urls_aniworld" type="bool" label="Aniworld: URL-Logging" default="false" />
+        <setting id="log_urls_aniworld" type="bool" label="Aniworld: URLs mitschreiben" default="false" />
-        <setting id="dump_html_aniworld" type="bool" label="Aniworld: HTML-Dumps" default="false" />
+        <setting id="dump_html_aniworld" type="bool" label="Aniworld: HTML speichern" default="false" />
-        <setting id="show_url_info_aniworld" type="bool" label="Aniworld: URL-Info anzeigen" default="false" />
+        <setting id="show_url_info_aniworld" type="bool" label="Aniworld: Aktuelle URL anzeigen" default="false" />
-        <setting id="log_errors_aniworld" type="bool" label="Aniworld: Fehler loggen" default="false" />
+        <setting id="log_errors_aniworld" type="bool" label="Aniworld: Fehler mitschreiben" default="false" />
-        <setting id="log_urls_topstreamfilm" type="bool" label="Topstreamfilm: URL-Logging" default="false" />
+        <setting id="log_urls_topstreamfilm" type="bool" label="Topstreamfilm: URLs mitschreiben" default="false" />
-        <setting id="dump_html_topstreamfilm" type="bool" label="Topstreamfilm: HTML-Dumps" default="false" />
+        <setting id="dump_html_topstreamfilm" type="bool" label="Topstreamfilm: HTML speichern" default="false" />
-        <setting id="show_url_info_topstreamfilm" type="bool" label="Topstreamfilm: URL-Info anzeigen" default="false" />
+        <setting id="show_url_info_topstreamfilm" type="bool" label="Topstreamfilm: Aktuelle URL anzeigen" default="false" />
-        <setting id="log_errors_topstreamfilm" type="bool" label="Topstreamfilm: Fehler loggen" default="false" />
+        <setting id="log_errors_topstreamfilm" type="bool" label="Topstreamfilm: Fehler mitschreiben" default="false" />
-        <setting id="log_urls_einschalten" type="bool" label="Einschalten: URL-Logging" default="false" />
+        <setting id="log_urls_einschalten" type="bool" label="Einschalten: URLs mitschreiben" default="false" />
-        <setting id="dump_html_einschalten" type="bool" label="Einschalten: HTML-Dumps" default="false" />
+        <setting id="dump_html_einschalten" type="bool" label="Einschalten: HTML speichern" default="false" />
-        <setting id="show_url_info_einschalten" type="bool" label="Einschalten: URL-Info anzeigen" default="false" />
+        <setting id="show_url_info_einschalten" type="bool" label="Einschalten: Aktuelle URL anzeigen" default="false" />
<setting id="log_errors_einschalten" type="bool" label="Einschalten: Fehler loggen" default="false" /> <setting id="log_errors_einschalten" type="bool" label="Einschalten: Fehler mitschreiben" default="false" />
<setting id="log_urls_filmpalast" type="bool" label="Filmpalast: URLs mitschreiben" default="false" />
<setting id="dump_html_filmpalast" type="bool" label="Filmpalast: HTML speichern" default="false" />
<setting id="show_url_info_filmpalast" type="bool" label="Filmpalast: Aktuelle URL anzeigen" default="false" />
<setting id="log_errors_filmpalast" type="bool" label="Filmpalast: Fehler mitschreiben" default="false" />
</category> </category>
<category label="TopStream"> <category label="TopStream">
<setting id="topstream_base_url" type="text" label="Domain (BASE_URL)" default="https://topstreamfilm.live" /> <setting id="topstream_base_url" type="text" label="Basis-URL" default="https://topstreamfilm.live" />
<setting id="topstream_genre_max_pages" type="number" label="Genres: max. Seiten laden (Pagination)" default="20" /> <setting id="topstreamfilm_metadata_source" type="enum" label="Metadatenquelle" default="0" values="Automatisch|Quelle|TMDB|Mischen" />
<setting id="topstream_genre_max_pages" type="number" label="Genres: max. Seiten laden" default="20" />
</category> </category>
<category label="SerienStream"> <category label="SerienStream">
<setting id="serienstream_base_url" type="text" label="Domain (BASE_URL)" default="https://s.to" /> <setting id="serienstream_base_url" type="text" label="Basis-URL" default="https://s.to" />
<setting id="serienstream_metadata_source" type="enum" label="Metadatenquelle" default="0" values="Automatisch|Quelle|TMDB|Mischen" />
</category> </category>
<category label="AniWorld"> <category label="AniWorld">
<setting id="aniworld_base_url" type="text" label="Domain (BASE_URL)" default="https://aniworld.to" /> <setting id="aniworld_base_url" type="text" label="Basis-URL" default="https://aniworld.to" />
<setting id="aniworld_metadata_source" type="enum" label="Metadatenquelle" default="0" values="Automatisch|Quelle|TMDB|Mischen" />
</category> </category>
<category label="Einschalten"> <category label="Einschalten">
<setting id="einschalten_base_url" type="text" label="Domain (BASE_URL)" default="https://einschalten.in" /> <setting id="einschalten_base_url" type="text" label="Basis-URL" default="https://einschalten.in" />
<setting id="einschalten_metadata_source" type="enum" label="Metadatenquelle" default="0" values="Automatisch|Quelle|TMDB|Mischen" />
</category>
<category label="Filmpalast">
<setting id="filmpalast_base_url" type="text" label="Basis-URL" default="https://filmpalast.to" />
<setting id="filmpalast_metadata_source" type="enum" label="Metadatenquelle" default="0" values="Automatisch|Quelle|TMDB|Mischen" />
</category>
<category label="Doku-Streams">
<setting id="doku_streams_base_url" type="text" label="Basis-URL" default="https://doku-streams.com" />
<setting id="doku_streams_metadata_source" type="enum" label="Metadatenquelle" default="0" values="Automatisch|Quelle|TMDB|Mischen" />
</category> </category>
<category label="TMDB"> <category label="TMDB">
<setting id="tmdb_enabled" type="bool" label="TMDB aktivieren" default="true" /> <setting id="tmdb_enabled" type="bool" label="TMDB aktivieren" default="true" />
<setting id="tmdb_api_key" type="text" label="TMDB API Key" default="" /> <setting id="tmdb_api_key" type="text" label="TMDB API Key" default="" />
<setting id="tmdb_language" type="text" label="TMDB Sprache (z.B. de-DE)" default="de-DE" /> <setting id="tmdb_language" type="text" label="TMDB Sprache (z. B. de-DE)" default="de-DE" />
<setting id="tmdb_prefetch_concurrency" type="number" label="TMDB: Parallelität (Prefetch, 1-20)" default="6" /> <setting id="tmdb_prefetch_concurrency" type="number" label="TMDB: gleichzeitige Anfragen (1-20)" default="6" />
<setting id="tmdb_show_plot" type="bool" label="TMDB Plot anzeigen" default="true" /> <setting id="tmdb_show_plot" type="bool" label="TMDB Beschreibung anzeigen" default="true" />
<setting id="tmdb_show_art" type="bool" label="TMDB Poster/Thumb anzeigen" default="true" /> <setting id="tmdb_show_art" type="bool" label="TMDB Poster und Vorschaubild anzeigen" default="true" />
<setting id="tmdb_show_fanart" type="bool" label="TMDB Fanart/Backdrop anzeigen" default="true" /> <setting id="tmdb_show_fanart" type="bool" label="TMDB Fanart/Backdrop anzeigen" default="true" />
<setting id="tmdb_show_rating" type="bool" label="TMDB Rating anzeigen" default="true" /> <setting id="tmdb_show_rating" type="bool" label="TMDB Bewertung anzeigen" default="true" />
<setting id="tmdb_show_votes" type="bool" label="TMDB Vote-Count anzeigen" default="false" /> <setting id="tmdb_show_votes" type="bool" label="TMDB Stimmen anzeigen" default="false" />
<setting id="tmdb_show_cast" type="bool" label="TMDB Cast anzeigen" default="false" /> <setting id="tmdb_show_cast" type="bool" label="TMDB Besetzung anzeigen" default="false" />
<setting id="tmdb_show_episode_cast" type="bool" label="TMDB Besetzung pro Episode anzeigen" default="false" /> <setting id="tmdb_show_episode_cast" type="bool" label="TMDB Besetzung pro Episode anzeigen" default="false" />
<setting id="tmdb_genre_metadata" type="bool" label="TMDB Meta in Genre-Liste anzeigen" default="false" /> <setting id="tmdb_genre_metadata" type="bool" label="TMDB Daten in Genre-Listen anzeigen" default="false" />
<setting id="tmdb_log_requests" type="bool" label="TMDB API Requests loggen" default="false" /> <setting id="tmdb_log_requests" type="bool" label="TMDB API-Anfragen loggen" default="false" />
<setting id="tmdb_log_responses" type="bool" label="TMDB API Antworten loggen" default="false" /> <setting id="tmdb_log_responses" type="bool" label="TMDB API-Antworten loggen" default="false" />
</category>
<category label="Update">
<setting id="update_repo_url" type="text" label="Update-URL (addons.xml)" default="http://127.0.0.1:8080/repo/addons.xml" />
<setting id="run_update_check" type="action" label="Jetzt nach Updates suchen" action="RunPlugin(plugin://plugin.video.viewit/?action=check_updates)" option="close" />
<setting id="update_info" type="text" label="Updates laufen ueber den normalen Kodi-Update-Mechanismus." default="" enable="false" />
<setting id="update_version_addon" type="text" label="ViewIT Version" default="-" enable="false" />
<setting id="update_version_serienstream" type="text" label="Serienstream Version" default="-" enable="false" />
<setting id="update_version_aniworld" type="text" label="Aniworld Version" default="-" enable="false" />
<setting id="update_version_einschalten" type="text" label="Einschalten Version" default="-" enable="false" />
<setting id="update_version_topstreamfilm" type="text" label="Topstreamfilm Version" default="-" enable="false" />
<setting id="update_version_filmpalast" type="text" label="Filmpalast Version" default="-" enable="false" />
<setting id="update_version_doku_streams" type="text" label="Doku-Streams Version" default="-" enable="false" />
</category> </category>
</settings> </settings>
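For context, a hedged sketch of how such `settings.xml` values are typically read inside the addon. Only `xbmcaddon.Addon().getSetting` is standard Kodi API; the clamping helper is illustrative, not the addon's actual code:

```python
import xbmcaddon  # available inside Kodi's Python runtime

_ADDON = xbmcaddon.Addon()


def _int_setting(setting_id: str, default: int, lo: int, hi: int) -> int:
    # Kodi hands settings back as strings; fall back and clamp defensively.
    try:
        value = int(_ADDON.getSetting(setting_id) or default)
    except ValueError:
        value = default
    return max(lo, min(hi, value))


# Example: the prefetch concurrency above is documented as 1-20.
concurrency = _int_setting("tmdb_prefetch_concurrency", default=6, lo=1, hi=20)
```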


@@ -14,6 +14,7 @@ except ImportError: # pragma: no cover
TMDB_API_BASE = "https://api.themoviedb.org/3" TMDB_API_BASE = "https://api.themoviedb.org/3"
TMDB_IMAGE_BASE = "https://image.tmdb.org/t/p" TMDB_IMAGE_BASE = "https://image.tmdb.org/t/p"
MAX_CAST_MEMBERS = 30
_TMDB_THREAD_LOCAL = threading.local() _TMDB_THREAD_LOCAL = threading.local()
@@ -73,53 +74,17 @@ def _fetch_credits(
return [] return []
params = {"api_key": api_key, "language": (language or "de-DE").strip()} params = {"api_key": api_key, "language": (language or "de-DE").strip()}
url = f"{TMDB_API_BASE}/{kind}/{tmdb_id}/credits?{urlencode(params)}" url = f"{TMDB_API_BASE}/{kind}/{tmdb_id}/credits?{urlencode(params)}"
if callable(log): status, payload, body_text = _tmdb_get_json(url=url, timeout=timeout, log=log, log_responses=log_responses)
log(f"TMDB GET {url}")
try:
response = requests.get(url, timeout=timeout)
except Exception as exc: # pragma: no cover
if callable(log):
log(f"TMDB ERROR /{kind}/{{id}}/credits request_failed error={exc!r}")
return []
status = getattr(response, "status_code", None)
if callable(log): if callable(log):
log(f"TMDB RESPONSE /{kind}/{{id}}/credits status={status}") log(f"TMDB RESPONSE /{kind}/{{id}}/credits status={status}")
if status != 200: if log_responses and payload is None and body_text:
log(f"TMDB RESPONSE_BODY /{kind}/{{id}}/credits body={body_text[:2000]}")
if status != 200 or not isinstance(payload, dict):
return [] return []
try:
payload = response.json() or {}
except Exception:
return []
if callable(log) and log_responses:
try:
dumped = json.dumps(payload, ensure_ascii=False)
except Exception:
dumped = str(payload)
log(f"TMDB RESPONSE_BODY /{kind}/{{id}}/credits body={dumped[:2000]}")
cast_payload = payload.get("cast") or [] cast_payload = payload.get("cast") or []
if callable(log): if callable(log):
log(f"TMDB CREDITS /{kind}/{{id}}/credits cast={len(cast_payload)}") log(f"TMDB CREDITS /{kind}/{{id}}/credits cast={len(cast_payload)}")
with_images: List[TmdbCastMember] = [] return _parse_cast_payload(cast_payload)
without_images: List[TmdbCastMember] = []
for entry in cast_payload:
name = (entry.get("name") or "").strip()
role = (entry.get("character") or "").strip()
thumb = _image_url(entry.get("profile_path") or "", size="w185")
if not name:
continue
member = TmdbCastMember(name=name, role=role, thumb=thumb)
if thumb:
with_images.append(member)
else:
without_images.append(member)
# Viele Kodi-Skins zeigen bei fehlendem Thumbnail Platzhalter-Köpfe.
# Bevorzugt daher Cast-Einträge mit Bild; nur wenn gar keine Bilder existieren,
# geben wir Namen ohne Bild zurück.
if with_images:
return with_images[:30]
return without_images[:30]
def _parse_cast_payload(cast_payload: object) -> List[TmdbCastMember]: def _parse_cast_payload(cast_payload: object) -> List[TmdbCastMember]:
@@ -141,8 +106,8 @@ def _parse_cast_payload(cast_payload: object) -> List[TmdbCastMember]:
else: else:
without_images.append(member) without_images.append(member)
if with_images: if with_images:
return with_images[:30] return with_images[:MAX_CAST_MEMBERS]
return without_images[:30] return without_images[:MAX_CAST_MEMBERS]
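The hunk only shows the tail of `_parse_cast_payload`; reconstructed from the duplicated per-endpoint logic this refactor removes, the shared helper plausibly reads:

```python
def _parse_cast_payload(cast_payload: object) -> List[TmdbCastMember]:
    # Many Kodi skins render placeholder heads for missing thumbnails, so
    # prefer cast entries with a profile image and fall back to image-less
    # entries only when no entry has an image at all.
    with_images: List[TmdbCastMember] = []
    without_images: List[TmdbCastMember] = []
    entries = cast_payload if isinstance(cast_payload, list) else []
    for entry in entries:
        if not isinstance(entry, dict):
            continue
        name = (entry.get("name") or "").strip()
        if not name:
            continue
        role = (entry.get("character") or "").strip()
        thumb = _image_url(entry.get("profile_path") or "", size="w185")
        member = TmdbCastMember(name=name, role=role, thumb=thumb)
        (with_images if thumb else without_images).append(member)
    if with_images:
        return with_images[:MAX_CAST_MEMBERS]
    return without_images[:MAX_CAST_MEMBERS]
```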
def _tmdb_get_json( def _tmdb_get_json(
@@ -163,23 +128,29 @@ def _tmdb_get_json(
if callable(log): if callable(log):
log(f"TMDB GET {url}") log(f"TMDB GET {url}")
sess = session or _get_tmdb_session() or requests.Session() sess = session or _get_tmdb_session() or requests.Session()
response = None
try: try:
response = sess.get(url, timeout=timeout) response = sess.get(url, timeout=timeout)
status = getattr(response, "status_code", None)
payload: object | None = None
body_text = ""
try:
payload = response.json()
except Exception:
try:
body_text = (response.text or "").strip()
except Exception:
body_text = ""
except Exception as exc: # pragma: no cover except Exception as exc: # pragma: no cover
if callable(log): if callable(log):
log(f"TMDB ERROR request_failed url={url} error={exc!r}") log(f"TMDB ERROR request_failed url={url} error={exc!r}")
return None, None, "" return None, None, ""
finally:
status = getattr(response, "status_code", None) if response is not None:
payload: object | None = None try:
body_text = "" response.close()
try: except Exception:
payload = response.json() pass
except Exception:
try:
body_text = (response.text or "").strip()
except Exception:
body_text = ""
if callable(log): if callable(log):
log(f"TMDB RESPONSE status={status} url={url}") log(f"TMDB RESPONSE status={status} url={url}")
@@ -214,49 +185,17 @@ def fetch_tv_episode_credits(
return [] return []
params = {"api_key": api_key, "language": (language or "de-DE").strip()} params = {"api_key": api_key, "language": (language or "de-DE").strip()}
url = f"{TMDB_API_BASE}/tv/{tmdb_id}/season/{season_number}/episode/{episode_number}/credits?{urlencode(params)}" url = f"{TMDB_API_BASE}/tv/{tmdb_id}/season/{season_number}/episode/{episode_number}/credits?{urlencode(params)}"
if callable(log): status, payload, body_text = _tmdb_get_json(url=url, timeout=timeout, log=log, log_responses=log_responses)
log(f"TMDB GET {url}")
try:
response = requests.get(url, timeout=timeout)
except Exception as exc: # pragma: no cover
if callable(log):
log(f"TMDB ERROR /tv/{{id}}/season/{{n}}/episode/{{e}}/credits request_failed error={exc!r}")
return []
status = getattr(response, "status_code", None)
if callable(log): if callable(log):
log(f"TMDB RESPONSE /tv/{{id}}/season/{{n}}/episode/{{e}}/credits status={status}") log(f"TMDB RESPONSE /tv/{{id}}/season/{{n}}/episode/{{e}}/credits status={status}")
if status != 200: if log_responses and payload is None and body_text:
log(f"TMDB RESPONSE_BODY /tv/{{id}}/season/{{n}}/episode/{{e}}/credits body={body_text[:2000]}")
if status != 200 or not isinstance(payload, dict):
return [] return []
try:
payload = response.json() or {}
except Exception:
return []
if callable(log) and log_responses:
try:
dumped = json.dumps(payload, ensure_ascii=False)
except Exception:
dumped = str(payload)
log(f"TMDB RESPONSE_BODY /tv/{{id}}/season/{{n}}/episode/{{e}}/credits body={dumped[:2000]}")
cast_payload = payload.get("cast") or [] cast_payload = payload.get("cast") or []
if callable(log): if callable(log):
log(f"TMDB CREDITS /tv/{{id}}/season/{{n}}/episode/{{e}}/credits cast={len(cast_payload)}") log(f"TMDB CREDITS /tv/{{id}}/season/{{n}}/episode/{{e}}/credits cast={len(cast_payload)}")
with_images: List[TmdbCastMember] = [] return _parse_cast_payload(cast_payload)
without_images: List[TmdbCastMember] = []
for entry in cast_payload:
name = (entry.get("name") or "").strip()
role = (entry.get("character") or "").strip()
thumb = _image_url(entry.get("profile_path") or "", size="w185")
if not name:
continue
member = TmdbCastMember(name=name, role=role, thumb=thumb)
if thumb:
with_images.append(member)
else:
without_images.append(member)
if with_images:
return with_images[:30]
return without_images[:30]
def lookup_tv_show( def lookup_tv_show(
@@ -546,27 +485,13 @@ def lookup_tv_season_summary(
params = {"api_key": api_key, "language": (language or "de-DE").strip()} params = {"api_key": api_key, "language": (language or "de-DE").strip()}
url = f"{TMDB_API_BASE}/tv/{tmdb_id}/season/{season_number}?{urlencode(params)}" url = f"{TMDB_API_BASE}/tv/{tmdb_id}/season/{season_number}?{urlencode(params)}"
if callable(log): status, payload, body_text = _tmdb_get_json(url=url, timeout=timeout, log=log, log_responses=log_responses)
log(f"TMDB GET {url}")
try:
response = requests.get(url, timeout=timeout)
except Exception:
return None
status = getattr(response, "status_code", None)
if callable(log): if callable(log):
log(f"TMDB RESPONSE /tv/{{id}}/season/{{n}} status={status}") log(f"TMDB RESPONSE /tv/{{id}}/season/{{n}} status={status}")
if status != 200: if log_responses and payload is None and body_text:
log(f"TMDB RESPONSE_BODY /tv/{{id}}/season/{{n}} body={body_text[:2000]}")
if status != 200 or not isinstance(payload, dict):
return None return None
try:
payload = response.json() or {}
except Exception:
return None
if callable(log) and log_responses:
try:
dumped = json.dumps(payload, ensure_ascii=False)
except Exception:
dumped = str(payload)
log(f"TMDB RESPONSE_BODY /tv/{{id}}/season/{{n}} body={dumped[:2000]}")
plot = (payload.get("overview") or "").strip() plot = (payload.get("overview") or "").strip()
poster_path = (payload.get("poster_path") or "").strip() poster_path = (payload.get("poster_path") or "").strip()
@@ -594,27 +519,9 @@ def lookup_tv_season(
return None return None
params = {"api_key": api_key, "language": (language or "de-DE").strip()} params = {"api_key": api_key, "language": (language or "de-DE").strip()}
url = f"{TMDB_API_BASE}/tv/{tmdb_id}/season/{season_number}?{urlencode(params)}" url = f"{TMDB_API_BASE}/tv/{tmdb_id}/season/{season_number}?{urlencode(params)}"
if callable(log): status, payload, body_text = _tmdb_get_json(url=url, timeout=timeout, log=log, log_responses=log_responses)
log(f"TMDB GET {url}") episodes = (payload or {}).get("episodes") if isinstance(payload, dict) else []
try: episodes = episodes or []
response = requests.get(url, timeout=timeout)
except Exception as exc: # pragma: no cover
if callable(log):
log(f"TMDB ERROR /tv/{{id}}/season/{{n}} request_failed error={exc!r}")
return None
status = getattr(response, "status_code", None)
payload = None
body_text = ""
try:
payload = response.json() or {}
except Exception:
try:
body_text = (response.text or "").strip()
except Exception:
body_text = ""
episodes = (payload or {}).get("episodes") or []
if callable(log): if callable(log):
log(f"TMDB RESPONSE /tv/{{id}}/season/{{n}} status={status} episodes={len(episodes)}") log(f"TMDB RESPONSE /tv/{{id}}/season/{{n}} status={status} episodes={len(episodes)}")
if log_responses: if log_responses:


@@ -1,54 +1,49 @@
# ViewIT Hauptlogik (`addon/default.py`) # ViewIT Hauptlogik (`addon/default.py`)
Dieses Dokument beschreibt den Einstiegspunkt des Addons und die zentrale Steuerlogik. Diese Datei ist der Router des Addons.
Sie verbindet Kodi UI, Plugin Calls und Playback.
## Aufgabe der Datei ## Kernaufgabe
`addon/default.py` ist der Router des Addons. Er: - Plugins laden
- lädt die Plugin-Module dynamisch, - Menues bauen
- stellt die Kodi-Navigation bereit, - Aktionen auf Plugin Methoden mappen
- übersetzt UI-Aktionen in Plugin-Aufrufe, - Playback starten
- startet die Wiedergabe und verwaltet Playstate/Resume. - Playstate speichern
## Ablauf (high level) ## Ablauf
1. **Plugin-Discovery**: Lädt alle `addon/plugins/*.py` (ohne `_`-Prefix) und instanziiert Klassen, die von `BasisPlugin` erben. 1. Plugin Discovery fuer `addon/plugins/*.py` ohne `_` Prefix.
2. **Navigation**: Baut Kodi-Listen (Serien/Staffeln/Episoden) auf Basis der Plugin-Antworten. 2. Navigation fuer Titel, Staffeln und Episoden.
3. **Playback**: Holt Stream-Links aus dem Plugin und startet die Wiedergabe. 3. Playback: Link holen, optional aufloesen, abspielen.
4. **Playstate**: Speichert Resume-Daten lokal (`playstate.json`) und setzt `playcount`/Resume-Infos. 4. Playstate: watched und resume in `playstate.json` schreiben.
## Routing & Aktionen ## Routing
Die Datei arbeitet mit URL-Parametern (Kodi-Plugin-Standard). Typische Aktionen: Der Router liest Query Parameter aus `sys.argv[2]`.
- `search` → Suche über ein Plugin Typische Aktionen:
- `seasons` → Staffeln für einen Titel - `search`
- `episodes` → Episoden für eine Staffel - `seasons`
- `play` → StreamLink auflösen und abspielen - `episodes`
- `play_episode`
- `play_movie`
- `play_episode_url`
Die genaue Aktion wird aus den Query-Parametern gelesen und an das entsprechende Plugin delegiert. ## Playstate
- Speicherort: Addon Profilordner, Datei `playstate.json`
- Key: Plugin + Titel + Staffel + Episode
- Werte: watched, playcount, resume_position, resume_total (siehe Beispiel unten)
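A hedged sketch of a single `playstate.json` entry implied by the key/value description above; the exact key separator is an assumption, not taken from the code:

```python
# Illustrative playstate.json content, written as a Python literal:
playstate = {
    "Serienstream|Star Trek: Picard|Staffel 1|Episode 3": {
        "watched": True,
        "playcount": 1,
        "resume_position": 1423.0,  # seconds into the episode
        "resume_total": 2701.0,     # total runtime in seconds
    }
}
```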
## Playstate (Resume/Watched) ## Wichtige Helper
- **Speicherort**: `playstate.json` im Addon-Profilordner. - Plugin Loader und Discovery
- **Key**: Kombination aus Plugin-Name, Titel, Staffel, Episode. - UI Builder fuer ListItems
- **Verwendung**: - Playstate Load/Save/Merge
- `playcount` wird gesetzt, wenn „gesehen“ markiert ist. - TMDB Merge mit Source Fallback
- `resume_position`/`resume_total` werden gesetzt, wenn vorhanden.
## Wichtige Hilfsfunktionen ## Fehlerverhalten
- **Plugin-Loader**: findet & instanziiert Plugins. - Importfehler pro Plugin werden isoliert behandelt.
- **UI-Helper**: setzt Content-Type, baut Verzeichniseinträge. - Fehler in einem Plugin sollen das Addon nicht stoppen.
- **Playstate-Helper**: `_load_playstate`, `_save_playstate`, `_apply_playstate_to_info`. - User bekommt kurze Fehlermeldungen in Kodi.
## Fehlerbehandlung ## Erweiterung
- Plugin-Importfehler werden isoliert behandelt, damit das Addon nicht komplett ausfällt. Fuer neue Aktion im Router:
- Netzwerk-Fehler werden in Plugins abgefangen, `default.py` sollte nur saubere Fehlermeldungen weitergeben. 1. Action im `run()` Handler registrieren.
2. ListItem mit passenden Parametern bauen.
## Debugging 3. Zielmethode im Plugin bereitstellen.
- Globale Debug-Settings werden über `addon/resources/settings.xml` gesteuert.
- Plugins loggen URLs/HTML optional (siehe jeweilige Plugin-Doku).
## Änderungen & Erweiterungen
Für neue Aktionen:
1. Neue Aktion im Router registrieren.
2. UIEinträge passend anlegen.
3. Entsprechende PluginMethode definieren oder erweitern.
## Hinweis zur Erstellung
Teile dieser Dokumentation wurden KI-gestützt erstellt und bei Bedarf manuell angepasst.
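To make the routing concrete, a minimal sketch of the parameter handling described above (illustrative shape, not the addon's actual code):

```python
import sys
from urllib.parse import parse_qsl


def run() -> None:
    # Kodi passes the plugin query string as sys.argv[2],
    # e.g. "?action=seasons&plugin=Serienstream&title=Foo".
    params = dict(parse_qsl(sys.argv[2].lstrip("?"))) if len(sys.argv) > 2 else {}
    action = params.get("action", "")
    if action == "search":
        ...  # delegate to plugin.search_titles(query)
    elif action == "seasons":
        ...  # plugin.seasons_for(params["title"])
    elif action == "episodes":
        ...  # plugin.episodes_for(params["title"], params["season"])
    elif action in ("play", "play_episode", "play_movie", "play_episode_url"):
        ...  # resolve the stream link, then hand the final URL to Kodi
```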


@@ -1,75 +1,85 @@
# ViewIT Entwicklerdoku Plugins (`addon/plugins/*_plugin.py`) # ViewIT Plugin Entwicklung (`addon/plugins/*_plugin.py`)
Diese Doku beschreibt, wie Plugins im ViewIT-Addon aufgebaut sind und wie neue Provider-Integrationen entwickelt werden. Diese Datei zeigt, wie Plugins im Projekt aufgebaut sind und wie sie mit dem Router zusammenarbeiten.
## Grundlagen ## Grundlagen
- Jedes Plugin ist eine einzelne Datei unter `addon/plugins/`. - Ein Plugin ist eine Python Datei in `addon/plugins/`.
- Dateinamen **ohne** `_`-Prefix werden automatisch geladen. - Dateien mit `_` Prefix werden nicht geladen.
- Jede Datei enthält eine Klasse, die von `BasisPlugin` erbt. - Plugin Klasse erbt von `BasisPlugin`.
- Optional: `Plugin = <Klasse>` als klarer Einstiegspunkt.
## Pflicht-Methoden (BasisPlugin) ## Pflichtmethoden
Jedes Plugin muss diese Methoden implementieren: Jedes Plugin implementiert:
- `async search_titles(query: str) -> list[str]` - `async search_titles(query: str) -> list[str]`
- `seasons_for(title: str) -> list[str]` - `seasons_for(title: str) -> list[str]`
- `episodes_for(title: str, season: str) -> list[str]` - `episodes_for(title: str, season: str) -> list[str]`
## Optionale Features (Capabilities) ## Wichtige optionale Methoden
Über `capabilities()` kann das Plugin zusätzliche Funktionen anbieten: - `stream_link_for(...)`
- `popular_series``popular_series()` - `resolve_stream_link(...)`
- `genres``genres()` + `titles_for_genre(genre)` - `metadata_for(...)`
- `latest_episodes``latest_episodes(page=1)` - `available_hosters_for(...)`
- `series_url_for_title(...)`
- `remember_series_url(...)`
- `episode_url_for(...)`
- `available_hosters_for_url(...)`
- `stream_link_for_url(...)`
## Empfohlene Struktur ## Film Provider Standard
- Konstanten für URLs/Endpoints (BASE_URL, Pfade, Templates) Wenn keine echten Staffeln existieren:
- `requests` + `bs4` optional (fehlt beides, Plugin sollte sauber deaktivieren) - `seasons_for(title)` gibt `['Film']`
- HelperFunktionen für Parsing und Normalisierung - `episodes_for(title, 'Film')` gibt `['Stream']`
- Caches für Such-, Staffel- und Episoden-Daten
## Suche (aktuelle Policy) ## Capabilities
- **Nur Titel-Matches** Ein Plugin kann Features melden ueber `capabilities()`.
- **Substring-Match** nach Normalisierung (Lowercase + Nicht-Alnum → Leerzeichen) Bekannte Werte:
- Keine Beschreibung/Plot/Meta für Matches - `popular_series`
- `genres`
- `latest_episodes`
- `new_titles`
- `alpha`
- `series_catalog`
## Namensgebung ## Suche
- Plugin-Klassenname: `XxxPlugin` Aktuelle Regeln fuer Suchtreffer:
- Anzeigename (Property `name`): **mit Großbuchstaben beginnen** (z.B. `Serienstream`, `Einschalten`) - Match auf Titel
- Wortbasiert
- Keine Teilwort Treffer im selben Wort
- Beschreibungen nicht fuer Match nutzen
## Settings pro Plugin ## Settings
Standard: `*_base_url` (Domain / BASE_URL) Pro Plugin meist `*_base_url`.
- Beispiele: Beispiele:
- `serienstream_base_url` - `serienstream_base_url`
- `aniworld_base_url` - `aniworld_base_url`
- `einschalten_base_url` - `einschalten_base_url`
- `topstream_base_url` - `topstream_base_url`
- `filmpalast_base_url`
- `doku_streams_base_url`
## Playback ## Playback Flow
- Wenn möglich `stream_link_for(...)` implementieren. 1. Episode oder Film auswaehlen.
- Optional `available_hosters_for(...)`/`resolve_stream_link(...)` für Hoster-Auflösung. 2. Optional Hosterliste anzeigen.
3. `stream_link_for` oder `stream_link_for_url` aufrufen.
4. `resolve_stream_link` aufrufen.
5. Finale URL an Kodi geben.
## Debugging ## Logging
Global gesteuert über Settings: Nutze Helper aus `addon/plugin_helpers.py`:
- `debug_log_urls`
- `debug_dump_html`
- `debug_show_url_info`
Plugins sollten die Helper aus `addon/plugin_helpers.py` nutzen:
- `log_url(...)` - `log_url(...)`
- `dump_response_html(...)` - `dump_response_html(...)`
- `notify_url(...)` - `notify_url(...)`
## Template ## Build und Checks
`addon/plugins/_template_plugin.py` dient als Startpunkt für neue Provider. - ZIP: `./scripts/build_kodi_zip.sh`
- Addon Ordner: `./scripts/build_install_addon.sh`
- Manifest: `python3 scripts/generate_plugin_manifest.py`
- Snapshot Checks: `python3 qa/run_plugin_snapshots.py`
## Build & Test ## Kurze Checkliste
- ZIP bauen: `./scripts/build_kodi_zip.sh` - `name` gesetzt und korrekt
- Addon-Ordner: `./scripts/build_install_addon.sh` - `*_base_url` in Settings vorhanden
- Suche liefert nur passende Titel
## Beispiel-Checkliste - Playback Methoden vorhanden
- [ ] `name` korrekt gesetzt - Fehler und Timeouts behandelt
- [ ] `*_base_url` in Settings vorhanden - Cache nur da, wo er Zeit spart
- [ ] Suche matcht nur Titel
- [ ] Fehlerbehandlung und Timeouts vorhanden
- [ ] Optional: Caches für Performance
## Hinweis zur Erstellung
Teile dieser Dokumentation wurden KI-gestützt erstellt und bei Bedarf manuell angepasst.
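A compact skeleton that satisfies the contract described above; illustrative only, `addon/plugins/_template_plugin.py` remains the canonical starting point:

```python
from typing import List

from plugin_interface import BasisPlugin  # addon-local base class


class MeinproviderPlugin(BasisPlugin):
    name = "Meinprovider"  # display name starts with a capital letter

    async def search_titles(self, query: str) -> List[str]:
        # Title-only, word-based matching per the search policy above.
        return []

    def seasons_for(self, title: str) -> List[str]:
        # Film-provider convention when the source has no real seasons:
        return ["Film"]

    def episodes_for(self, title: str, season: str) -> List[str]:
        return ["Stream"]


Plugin = MeinproviderPlugin  # optional explicit entry point
```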

docs/PLUGIN_MANIFEST.json Normal file

@@ -0,0 +1,104 @@
{
"schema_version": 1,
"plugins": [
{
"file": "addon/plugins/aniworld_plugin.py",
"module": "aniworld_plugin",
"name": "Aniworld",
"class": "AniworldPlugin",
"version": "1.0.0",
"capabilities": [
"genres",
"latest_episodes",
"popular_series"
],
"prefer_source_metadata": false,
"base_url_setting": "aniworld_base_url",
"available": true,
"unavailable_reason": null,
"error": null
},
{
"file": "addon/plugins/dokustreams_plugin.py",
"module": "dokustreams_plugin",
"name": "Doku-Streams",
"class": "DokuStreamsPlugin",
"version": "1.0.0",
"capabilities": [
"genres",
"popular_series"
],
"prefer_source_metadata": true,
"base_url_setting": "doku_streams_base_url",
"available": true,
"unavailable_reason": null,
"error": null
},
{
"file": "addon/plugins/einschalten_plugin.py",
"module": "einschalten_plugin",
"name": "Einschalten",
"class": "EinschaltenPlugin",
"version": "1.0.0",
"capabilities": [
"genres",
"new_titles"
],
"prefer_source_metadata": false,
"base_url_setting": "einschalten_base_url",
"available": true,
"unavailable_reason": null,
"error": null
},
{
"file": "addon/plugins/filmpalast_plugin.py",
"module": "filmpalast_plugin",
"name": "Filmpalast",
"class": "FilmpalastPlugin",
"version": "1.0.0",
"capabilities": [
"alpha",
"genres",
"series_catalog"
],
"prefer_source_metadata": false,
"base_url_setting": "filmpalast_base_url",
"available": true,
"unavailable_reason": null,
"error": null
},
{
"file": "addon/plugins/serienstream_plugin.py",
"module": "serienstream_plugin",
"name": "Serienstream",
"class": "SerienstreamPlugin",
"version": "1.0.0",
"capabilities": [
"genres",
"latest_episodes",
"popular_series"
],
"prefer_source_metadata": false,
"base_url_setting": "serienstream_base_url",
"available": true,
"unavailable_reason": null,
"error": null
},
{
"file": "addon/plugins/topstreamfilm_plugin.py",
"module": "topstreamfilm_plugin",
"name": "Topstreamfilm",
"class": "TopstreamfilmPlugin",
"version": "1.0.0",
"capabilities": [
"genres",
"popular_series"
],
"prefer_source_metadata": false,
"base_url_setting": "topstream_base_url",
"available": true,
"unavailable_reason": null,
"error": null
}
]
}
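Since the manifest is plain JSON, downstream tooling can sanity-check it in a few lines; a sketch:

```python
import json
from pathlib import Path

manifest = json.loads(Path("docs/PLUGIN_MANIFEST.json").read_text(encoding="utf-8"))
assert manifest["schema_version"] == 1
for plugin in manifest["plugins"]:
    caps = ", ".join(plugin["capabilities"]) or "-"
    print(f"{plugin['name']} {plugin['version']} [{caps}] available={plugin['available']}")
```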


@@ -1,95 +1,71 @@
## ViewIt Plugin-System # ViewIT Plugin System
Dieses Dokument beschreibt, wie das Plugin-System von **ViewIt** funktioniert und wie die Community neue Integrationen hinzufügen kann. Dieses Dokument beschreibt Laden, Vertrag und Betrieb der Plugins.
### Überblick ## Ueberblick
Der Router laedt Provider Integrationen aus `addon/plugins/*.py`.
Aktive Plugins werden instanziiert und im UI genutzt.
ViewIt lädt Provider-Integrationen dynamisch aus `addon/plugins/*.py`. Jede Datei enthält eine Klasse, die von `BasisPlugin` erbt. Beim Start werden alle Plugins instanziiert und nur aktiv genutzt, wenn sie verfügbar sind. Relevante Dateien:
- `addon/default.py`
- `addon/plugin_interface.py`
- `docs/DEFAULT_ROUTER.md`
- `docs/PLUGIN_DEVELOPMENT.md`
Weitere Details: ## Aktuelle Plugins
- `docs/DEFAULT_ROUTER.md` (Hauptlogik in `addon/default.py`) - `serienstream_plugin.py`
- `docs/PLUGIN_DEVELOPMENT.md` (Entwicklerdoku für Plugins) - `topstreamfilm_plugin.py`
- `einschalten_plugin.py`
- `aniworld_plugin.py`
- `filmpalast_plugin.py`
- `dokustreams_plugin.py`
- `_template_plugin.py` (Vorlage)
### Aktuelle Plugins ## Discovery Ablauf
In `addon/default.py`:
1. Finde `*.py` in `addon/plugins/`
2. Ueberspringe Dateien mit `_` Prefix
3. Importiere Modul
4. Nutze `Plugin = <Klasse>`, falls vorhanden
5. Sonst instanziiere `BasisPlugin` Subklassen deterministisch
6. Ueberspringe Plugins mit `is_available = False`
- `serienstream_plugin.py` Serienstream (s.to) ## Basis Interface
- `topstreamfilm_plugin.py` Topstreamfilm `BasisPlugin` definiert den Kern:
- `einschalten_plugin.py` Einschalten - `search_titles`
- `aniworld_plugin.py` Aniworld - `seasons_for`
- `_template_plugin.py` Vorlage für neue Plugins - `episodes_for`
### Plugin-Discovery (Ladeprozess) Weitere Methoden sind optional und werden nur genutzt, wenn vorhanden.
Der Loader in `addon/default.py`: ## Capabilities
Plugins koennen Features aktiv melden.
Typische Werte:
- `popular_series`
- `genres`
- `latest_episodes`
- `new_titles`
- `alpha`
- `series_catalog`
1. Sucht alle `*.py` in `addon/plugins/` Das UI zeigt nur Menues fuer aktiv gemeldete Features.
2. Überspringt Dateien, die mit `_` beginnen
3. Lädt Module dynamisch
4. Instanziiert Klassen, die von `BasisPlugin` erben
5. Ignoriert Plugins mit `is_available = False`
Damit bleiben fehlerhafte Plugins isoliert und blockieren nicht das gesamte Add-on. ## Metadaten Quelle
`prefer_source_metadata = True` bedeutet:
- Quelle zuerst
- TMDB nur Fallback
### BasisPlugin verpflichtende Methoden ## Stabilitaet
- Keine Netz Calls im Import Block.
- Fehler im Plugin muessen lokal behandelt werden.
- Ein defektes Plugin darf andere Plugins nicht blockieren.
Definiert in `addon/plugin_interface.py`: ## Build
Kodi ZIP bauen:
- `async search_titles(query: str) -> list[str]` ```bash
- `seasons_for(title: str) -> list[str]`
- `episodes_for(title: str, season: str) -> list[str]`
### Optionale Features (Capabilities)
Plugins können zusätzliche Features anbieten:
- `capabilities() -> set[str]`
- `popular_series`: liefert beliebte Serien
- `genres`: Genre-Liste verfügbar
- `latest_episodes`: neue Episoden verfügbar
- `popular_series() -> list[str]`
- `genres() -> list[str]`
- `titles_for_genre(genre: str) -> list[str]`
- `latest_episodes(page: int = 1) -> list[LatestEpisode]` (wenn angeboten)
ViewIt zeigt im UI nur die Features an, die ein Plugin tatsächlich liefert.
### Plugin-Struktur (empfohlen)
Eine Integration sollte typischerweise bieten:
- Konstante `BASE_URL`
- `search_titles()` mit Provider-Suche
- `seasons_for()` und `episodes_for()` mit HTML-Parsing
- `stream_link_for()` optional für direkte Playback-Links
- Optional: `available_hosters_for()` oder Provider-spezifische Helfer
Als Startpunkt dient `addon/plugins/_template_plugin.py`.
### Community-Erweiterungen (Workflow)
1. Fork/Branch erstellen
2. Neue Datei unter `addon/plugins/` hinzufügen (z.B. `meinprovider_plugin.py`)
3. Klasse erstellen, die `BasisPlugin` implementiert
4. In Kodi testen (ZIP bauen, installieren)
5. PR öffnen
### Qualitätsrichtlinien
- Keine Netzwerkzugriffe im Import-Top-Level
- Netzwerkzugriffe nur in Methoden (z.B. `search_titles`)
- Fehler sauber abfangen und verständliche Fehlermeldungen liefern
- Kein globaler Zustand, der über Instanzen hinweg überrascht
- Provider-spezifische Parser in Helper-Funktionen kapseln
### Debugging & Logs
Hilfreiche Logs werden nach `userdata/addon_data/plugin.video.viewit/logs/` geschrieben.
Provider sollten URL-Logging optional halten (Settings).
### ZIP-Build
```
./scripts/build_kodi_zip.sh ./scripts/build_kodi_zip.sh
``` ```
Das ZIP liegt anschließend unter `dist/plugin.video.viewit-<version>.zip`. Ergebnis:
`dist/plugin.video.viewit-<version>.zip`
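Since the UI only builds menus for features a plugin actively reports, a provider opts in through `capabilities()`; schematically:

```python
def capabilities(self) -> set:
    # Report only what the provider really serves; values come from the
    # known set listed above (sketch, not a specific plugin's code).
    return {"genres", "popular_series", "latest_episodes"}
```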

pyproject.toml Normal file

@@ -0,0 +1,20 @@
[tool.pytest.ini_options]
addopts = "-q --cov=addon --cov-report=term-missing"
python_files = ["test_*.py"]
norecursedirs = ["scripts"]
markers = [
"live: real HTTP requests (set LIVE_TESTS=1 to run)",
"perf: performance benchmarks",
]
[tool.coverage.run]
source = ["addon"]
branch = true
omit = [
"*/__pycache__/*",
"addon/resources/*",
]
[tool.coverage.report]
show_missing = true
skip_empty = true
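The `live` marker above only declares the marker name; gating such tests on `LIVE_TESTS=1` is typically done with a skip condition, for example (a sketch, not necessarily this project's conftest):

```python
import os

import pytest

requires_live = pytest.mark.skipif(
    os.environ.get("LIVE_TESTS") != "1",
    reason="set LIVE_TESTS=1 to run real HTTP requests",
)


@requires_live
@pytest.mark.live
def test_search_titles_live():
    ...
```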

qa/plugin_snapshots.json Normal file

@@ -0,0 +1,73 @@
{
"snapshots": {
"Serienstream::search_titles::trek": [
"Star Trek: Lower Decks",
"Star Trek: Prodigy",
"Star Trek: The Animated Series",
"Inside Star Trek",
"Raumschiff Enterprise - Star Trek: The Original Series",
"Star Trek: Deep Space Nine",
"Star Trek: Discovery",
"Star Trek: Enterprise",
"Star Trek: Picard",
"Star Trek: Raumschiff Voyager",
"Star Trek: Short Treks",
"Star Trek: Starfleet Academy",
"Star Trek: Strange New Worlds",
"Star Trek: The Next Generation"
],
"Aniworld::search_titles::naruto": [
"Naruto",
"Naruto Shippuden",
"Boruto: Naruto Next Generations",
"Naruto Spin-Off: Rock Lee &amp; His Ninja Pals"
],
"Topstreamfilm::search_titles::matrix": [
"Darkdrive Verschollen in der Matrix",
"Matrix Reloaded",
"Armitage III: Poly Matrix",
"Matrix Resurrections",
"Matrix",
"Matrix Revolutions",
"Matrix Fighters"
],
"Einschalten::new_titles_page::1": [
"Miracle: Das Eishockeywunder von 1980",
"No Escape - Grizzly Night",
"Kidnapped: Der Fall Elizabeth Smart",
"The Internship",
"The Rip",
"Die Toten vom Bodensee Schicksalsrad",
"People We Meet on Vacation",
"Anaconda",
"Even If This Love Disappears Tonight",
"Die Stunde der Mutigen",
"10DANCE",
"SpongeBob Schwammkopf: Piraten Ahoi!",
"Ella McCay",
"Merv",
"Elmo and Mark Rober's Merry Giftmas",
"Als mein Vater Weihnachten rettete 2",
"Die Fraggles: Der erste Schnee",
"Gregs Tagebuch 3: Jetzt reicht's!",
"Not Without Hope",
"Five Nights at Freddy's 2"
],
"Filmpalast::search_titles::trek": [
"Star Trek",
"Star Trek - Der Film",
"Star Trek 2 - Der Zorn des Khan",
"Star Trek 9 Der Aufstand",
"Star Trek: Nemesis",
"Star Trek: Section 31",
"Star Trek: Starfleet Academy",
"Star Trek: Strange New Worlds"
],
"Doku-Streams::search_titles::japan": [
"Deutsche im Knast - Japan und die Disziplin",
"Die Meerfrauen von Japan",
"Japan - Land der Moderne und Tradition",
"Japan im Zweiten Weltkrieg - Der Fall des Kaiserreichs"
]
}
}

qa/run_plugin_snapshots.py Executable file

@@ -0,0 +1,153 @@
#!/usr/bin/env python3
"""Run live snapshot checks for plugins.
Use --update to refresh stored snapshots.
"""
from __future__ import annotations
import argparse
import asyncio
import importlib.util
import inspect
import json
import sys
from pathlib import Path
from typing import Any
ROOT_DIR = Path(__file__).resolve().parents[1]
PLUGIN_DIR = ROOT_DIR / "addon" / "plugins"
SNAPSHOT_PATH = ROOT_DIR / "qa" / "plugin_snapshots.json"
sys.path.insert(0, str(ROOT_DIR / "addon"))
try:
from plugin_interface import BasisPlugin # type: ignore
except Exception as exc: # pragma: no cover
raise SystemExit(f"Failed to import BasisPlugin: {exc}")
CONFIG = [
{"plugin": "Serienstream", "method": "search_titles", "args": ["trek"], "max_items": 20},
{"plugin": "Aniworld", "method": "search_titles", "args": ["naruto"], "max_items": 20},
{"plugin": "Topstreamfilm", "method": "search_titles", "args": ["matrix"], "max_items": 20},
{"plugin": "Einschalten", "method": "new_titles_page", "args": [1], "max_items": 20},
{"plugin": "Filmpalast", "method": "search_titles", "args": ["trek"], "max_items": 20},
{"plugin": "Doku-Streams", "method": "search_titles", "args": ["japan"], "max_items": 20},
]
def _import_module(path: Path):
spec = importlib.util.spec_from_file_location(path.stem, path)
if spec is None or spec.loader is None:
raise ImportError(f"Missing spec for {path}")
module = importlib.util.module_from_spec(spec)
sys.modules[spec.name] = module
spec.loader.exec_module(module)
return module
def _discover_plugins() -> dict[str, BasisPlugin]:
plugins: dict[str, BasisPlugin] = {}
for file_path in sorted(PLUGIN_DIR.glob("*.py")):
if file_path.name.startswith("_"):
continue
module = _import_module(file_path)
preferred = getattr(module, "Plugin", None)
if inspect.isclass(preferred) and issubclass(preferred, BasisPlugin) and preferred is not BasisPlugin:
classes = [preferred]
else:
classes = [
obj
for obj in module.__dict__.values()
if inspect.isclass(obj) and issubclass(obj, BasisPlugin) and obj is not BasisPlugin
]
classes.sort(key=lambda cls: cls.__name__.casefold())
for cls in classes:
instance = cls()
name = str(getattr(instance, "name", "") or "").strip()
if name and name not in plugins:
plugins[name] = instance
return plugins
def _normalize_titles(value: Any, max_items: int) -> list[str]:
if not value:
return []
titles = [str(item).strip() for item in list(value) if item and str(item).strip()]
seen = set()
normalized: list[str] = []
for title in titles:
key = title.casefold()
if key in seen:
continue
seen.add(key)
normalized.append(title)
if len(normalized) >= max_items:
break
return normalized
def _snapshot_key(entry: dict[str, Any]) -> str:
args = entry.get("args", [])
return f"{entry['plugin']}::{entry['method']}::{','.join(str(a) for a in args)}"
def _call_method(plugin: BasisPlugin, method_name: str, args: list[Any]):
method = getattr(plugin, method_name, None)
if not callable(method):
raise RuntimeError(f"Method missing: {method_name}")
result = method(*args)
if asyncio.iscoroutine(result):
return asyncio.run(result)
return result
def main() -> int:
parser = argparse.ArgumentParser()
parser.add_argument("--update", action="store_true")
args = parser.parse_args()
snapshots: dict[str, Any] = {}
if SNAPSHOT_PATH.exists():
snapshots = json.loads(SNAPSHOT_PATH.read_text(encoding="utf-8"))
data = snapshots.get("snapshots", {}) if isinstance(snapshots, dict) else {}
if args.update:
data = {}
plugins = _discover_plugins()
errors = []
for entry in CONFIG:
plugin_name = entry["plugin"]
plugin = plugins.get(plugin_name)
if plugin is None:
errors.append(f"Plugin missing: {plugin_name}")
continue
key = _snapshot_key(entry)
try:
result = _call_method(plugin, entry["method"], entry.get("args", []))
normalized = _normalize_titles(result, entry.get("max_items", 20))
except Exception as exc:
errors.append(f"Snapshot error: {key} ({exc})")
if args.update:
data[key] = {"error": str(exc)}
continue
if args.update:
data[key] = normalized
else:
expected = data.get(key)
if expected != normalized:
errors.append(f"Snapshot mismatch: {key}\nExpected: {expected}\nActual: {normalized}")
if args.update:
SNAPSHOT_PATH.parent.mkdir(parents=True, exist_ok=True)
SNAPSHOT_PATH.write_text(json.dumps({"snapshots": data}, indent=2, ensure_ascii=False) + "\n", encoding="utf-8")
if errors:
for err in errors:
print(err)
return 1
return 0
if __name__ == "__main__":
raise SystemExit(main())
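The keys stored in `qa/plugin_snapshots.json` follow the `_snapshot_key` scheme above; for example:

```python
entry = {"plugin": "Serienstream", "method": "search_titles", "args": ["trek"]}
key = f"{entry['plugin']}::{entry['method']}::{','.join(str(a) for a in entry['args'])}"
assert key == "Serienstream::search_titles::trek"  # matches the stored snapshot key
```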


@@ -0,0 +1,17 @@
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<addon id="repository.viewit" name="ViewIT Repository" version="1.0.0" provider-name="ViewIT">
<extension point="xbmc.addon.repository" name="ViewIT Repository">
<dir>
<info compressed="false">http://127.0.0.1:8080/repo/addons.xml</info>
<checksum>http://127.0.0.1:8080/repo/addons.xml.md5</checksum>
<datadir zip="true">http://127.0.0.1:8080/repo/</datadir>
</dir>
</extension>
<extension point="xbmc.addon.metadata">
<summary lang="de_DE">Lokales Repository fuer ViewIT Updates</summary>
<summary lang="en_GB">Local repository for ViewIT updates</summary>
<description lang="de_DE">Stellt das ViewIT Addon ueber ein Kodi Repository bereit.</description>
<description lang="en_GB">Provides the ViewIT addon via a Kodi repository.</description>
<platform>all</platform>
</extension>
</addon>

requirements-dev.txt Normal file

@@ -0,0 +1,2 @@
pytest>=9,<10
pytest-cov>=5,<8


@@ -37,6 +37,6 @@ ZIP_PATH="${INSTALL_DIR}/${ZIP_NAME}"
ADDON_DIR="$("${ROOT_DIR}/scripts/build_install_addon.sh" >/dev/null; echo "${INSTALL_DIR}/${ADDON_ID}")" ADDON_DIR="$("${ROOT_DIR}/scripts/build_install_addon.sh" >/dev/null; echo "${INSTALL_DIR}/${ADDON_ID}")"
rm -f "${ZIP_PATH}" rm -f "${ZIP_PATH}"
(cd "${INSTALL_DIR}" && zip -r "${ZIP_NAME}" "$(basename "${ADDON_DIR}")" >/dev/null) python3 "${ROOT_DIR}/scripts/zip_deterministic.py" "${ZIP_PATH}" "${ADDON_DIR}" >/dev/null
echo "${ZIP_PATH}" echo "${ZIP_PATH}"

scripts/build_local_kodi_repo.sh Executable file

@@ -0,0 +1,126 @@
#!/usr/bin/env bash
set -euo pipefail
ROOT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")/.." && pwd)"
DIST_DIR="${ROOT_DIR}/dist"
REPO_DIR="${DIST_DIR}/repo"
PLUGIN_ADDON_XML="${ROOT_DIR}/addon/addon.xml"
REPO_SRC_DIR="${ROOT_DIR}/repository.viewit"
REPO_ADDON_XML="${REPO_SRC_DIR}/addon.xml"
REPO_BASE_URL="${REPO_BASE_URL:-http://127.0.0.1:8080/repo}"
if [[ ! -f "${PLUGIN_ADDON_XML}" ]]; then
echo "Missing: ${PLUGIN_ADDON_XML}" >&2
exit 1
fi
if [[ ! -f "${REPO_ADDON_XML}" ]]; then
echo "Missing: ${REPO_ADDON_XML}" >&2
exit 1
fi
mkdir -p "${REPO_DIR}"
read -r ADDON_ID ADDON_VERSION < <(python3 - "${PLUGIN_ADDON_XML}" <<'PY'
import sys
import xml.etree.ElementTree as ET
root = ET.parse(sys.argv[1]).getroot()
print(root.attrib.get("id", "plugin.video.viewit"), root.attrib.get("version", "0.0.0"))
PY
)
PLUGIN_ZIP="$("${ROOT_DIR}/scripts/build_kodi_zip.sh")"
PLUGIN_ZIP_NAME="$(basename "${PLUGIN_ZIP}")"
PLUGIN_ADDON_DIR_IN_REPO="${REPO_DIR}/${ADDON_ID}"
mkdir -p "${PLUGIN_ADDON_DIR_IN_REPO}"
cp -f "${PLUGIN_ZIP}" "${PLUGIN_ADDON_DIR_IN_REPO}/${PLUGIN_ZIP_NAME}"
read -r REPO_ADDON_ID REPO_ADDON_VERSION < <(python3 - "${REPO_ADDON_XML}" <<'PY'
import sys
import xml.etree.ElementTree as ET
root = ET.parse(sys.argv[1]).getroot()
print(root.attrib.get("id", "repository.viewit"), root.attrib.get("version", "0.0.0"))
PY
)
TMP_DIR="$(mktemp -d)"
trap 'rm -rf "${TMP_DIR}"' EXIT
TMP_REPO_ADDON_DIR="${TMP_DIR}/${REPO_ADDON_ID}"
mkdir -p "${TMP_REPO_ADDON_DIR}"
if command -v rsync >/dev/null 2>&1; then
rsync -a --delete "${REPO_SRC_DIR}/" "${TMP_REPO_ADDON_DIR}/"
else
cp -a "${REPO_SRC_DIR}/." "${TMP_REPO_ADDON_DIR}/"
fi
python3 - "${TMP_REPO_ADDON_DIR}/addon.xml" "${REPO_BASE_URL}" <<'PY'
import sys
import xml.etree.ElementTree as ET
addon_xml = sys.argv[1]
base_url = sys.argv[2].rstrip("/")
tree = ET.parse(addon_xml)
root = tree.getroot()
dir_node = root.find(".//dir")
if dir_node is None:
raise SystemExit("Invalid repository addon.xml: missing <dir>")
info = dir_node.find("info")
checksum = dir_node.find("checksum")
datadir = dir_node.find("datadir")
if info is None or checksum is None or datadir is None:
raise SystemExit("Invalid repository addon.xml: missing info/checksum/datadir")
info.text = f"{base_url}/addons.xml"
checksum.text = f"{base_url}/addons.xml.md5"
datadir.text = f"{base_url}/"
tree.write(addon_xml, encoding="utf-8", xml_declaration=True)
PY
REPO_ZIP_NAME="${REPO_ADDON_ID}-${REPO_ADDON_VERSION}.zip"
REPO_ZIP_PATH="${REPO_DIR}/${REPO_ZIP_NAME}"
rm -f "${REPO_ZIP_PATH}"
python3 "${ROOT_DIR}/scripts/zip_deterministic.py" "${REPO_ZIP_PATH}" "${TMP_REPO_ADDON_DIR}" >/dev/null
REPO_ADDON_DIR_IN_REPO="${REPO_DIR}/${REPO_ADDON_ID}"
mkdir -p "${REPO_ADDON_DIR_IN_REPO}"
cp -f "${REPO_ZIP_PATH}" "${REPO_ADDON_DIR_IN_REPO}/${REPO_ZIP_NAME}"
python3 - "${PLUGIN_ADDON_XML}" "${TMP_REPO_ADDON_DIR}/addon.xml" "${REPO_DIR}/addons.xml" <<'PY'
import sys
import xml.etree.ElementTree as ET
from pathlib import Path
plugin_xml = Path(sys.argv[1])
repo_xml = Path(sys.argv[2])
target = Path(sys.argv[3])
addons = ET.Element("addons")
for source in (plugin_xml, repo_xml):
root = ET.parse(source).getroot()
addons.append(root)
target.write_text('<?xml version="1.0" encoding="UTF-8"?>\n' + ET.tostring(addons, encoding="unicode"), encoding="utf-8")
PY
python3 - "${REPO_DIR}/addons.xml" "${REPO_DIR}/addons.xml.md5" <<'PY'
import hashlib
import sys
from pathlib import Path
addons_xml = Path(sys.argv[1])
md5_file = Path(sys.argv[2])
md5 = hashlib.md5(addons_xml.read_bytes()).hexdigest()
md5_file.write_text(md5, encoding="ascii")
PY
echo "Repo built:"
echo " ${REPO_DIR}/addons.xml"
echo " ${REPO_DIR}/addons.xml.md5"
echo " ${REPO_ZIP_PATH}"
echo " ${PLUGIN_ADDON_DIR_IN_REPO}/${PLUGIN_ZIP_NAME}"
echo " ${REPO_ADDON_DIR_IN_REPO}/${REPO_ZIP_NAME}"


@@ -0,0 +1,106 @@
#!/usr/bin/env python3
"""Generate a JSON manifest for addon plugins."""
from __future__ import annotations
import importlib.util
import inspect
import json
import sys
from pathlib import Path
ROOT_DIR = Path(__file__).resolve().parents[1]
PLUGIN_DIR = ROOT_DIR / "addon" / "plugins"
OUTPUT_PATH = ROOT_DIR / "docs" / "PLUGIN_MANIFEST.json"
sys.path.insert(0, str(ROOT_DIR / "addon"))
try:
from plugin_interface import BasisPlugin # type: ignore
except Exception as exc: # pragma: no cover
raise SystemExit(f"Failed to import BasisPlugin: {exc}")
def _import_module(path: Path):
spec = importlib.util.spec_from_file_location(path.stem, path)
if spec is None or spec.loader is None:
raise ImportError(f"Missing spec for {path}")
module = importlib.util.module_from_spec(spec)
sys.modules[spec.name] = module
spec.loader.exec_module(module)
return module
def _collect_plugins():
plugins = []
for file_path in sorted(PLUGIN_DIR.glob("*.py")):
if file_path.name.startswith("_"):
continue
entry = {
"file": str(file_path.relative_to(ROOT_DIR)),
"module": file_path.stem,
"name": None,
"class": None,
"version": None,
"capabilities": [],
"prefer_source_metadata": False,
"base_url_setting": None,
"available": None,
"unavailable_reason": None,
"error": None,
}
try:
module = _import_module(file_path)
preferred = getattr(module, "Plugin", None)
if inspect.isclass(preferred) and issubclass(preferred, BasisPlugin) and preferred is not BasisPlugin:
classes = [preferred]
else:
classes = [
obj
for obj in module.__dict__.values()
if inspect.isclass(obj) and issubclass(obj, BasisPlugin) and obj is not BasisPlugin
]
classes.sort(key=lambda cls: cls.__name__.casefold())
if not classes:
entry["error"] = "No plugin classes found"
plugins.append(entry)
continue
cls = classes[0]
instance = cls()
entry["class"] = cls.__name__
entry["name"] = str(getattr(instance, "name", "") or "") or None
entry["version"] = str(getattr(instance, "version", "0.0.0") or "0.0.0")
entry["prefer_source_metadata"] = bool(getattr(instance, "prefer_source_metadata", False))
entry["available"] = bool(getattr(instance, "is_available", True))
entry["unavailable_reason"] = getattr(instance, "unavailable_reason", None)
try:
caps = instance.capabilities() # type: ignore[call-arg]
entry["capabilities"] = sorted([str(c) for c in caps]) if caps else []
except Exception:
entry["capabilities"] = []
entry["base_url_setting"] = getattr(module, "SETTING_BASE_URL", None)
except Exception as exc: # pragma: no cover
entry["error"] = str(exc)
plugins.append(entry)
plugins.sort(key=lambda item: (item.get("name") or item["module"]).casefold())
return plugins
def main() -> int:
if not PLUGIN_DIR.exists():
raise SystemExit("Plugin directory missing")
manifest = {
"schema_version": 1,
"plugins": _collect_plugins(),
}
OUTPUT_PATH.parent.mkdir(parents=True, exist_ok=True)
OUTPUT_PATH.write_text(json.dumps(manifest, indent=2, ensure_ascii=False) + "\n", encoding="utf-8")
print(str(OUTPUT_PATH))
return 0
if __name__ == "__main__":
raise SystemExit(main())

scripts/publish_gitea_release.sh Executable file

@@ -0,0 +1,193 @@
#!/usr/bin/env bash
set -euo pipefail
ROOT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")/.." && pwd)"
ADDON_XML="${ROOT_DIR}/addon/addon.xml"
DEFAULT_NOTES="Automatischer Release-Upload aus ViewIT Build."
TAG=""
ASSET_PATH=""
TITLE=""
NOTES="${DEFAULT_NOTES}"
DRY_RUN="0"
while [[ $# -gt 0 ]]; do
case "$1" in
--tag)
TAG="${2:-}"
shift 2
;;
--asset)
ASSET_PATH="${2:-}"
shift 2
;;
--title)
TITLE="${2:-}"
shift 2
;;
--notes)
NOTES="${2:-}"
shift 2
;;
--dry-run)
DRY_RUN="1"
shift
;;
*)
echo "Unbekanntes Argument: $1" >&2
exit 1
;;
esac
done
if [[ ! -f "${ADDON_XML}" ]]; then
echo "Missing: ${ADDON_XML}" >&2
exit 1
fi
read -r ADDON_ID ADDON_VERSION < <(python3 - "${ADDON_XML}" <<'PY'
import sys
import xml.etree.ElementTree as ET
root = ET.parse(sys.argv[1]).getroot()
print(root.attrib.get("id", "plugin.video.viewit"), root.attrib.get("version", "0.0.0"))
PY
)
if [[ -z "${TAG}" ]]; then
TAG="v${ADDON_VERSION}"
fi
if [[ -z "${ASSET_PATH}" ]]; then
ASSET_PATH="${ROOT_DIR}/dist/${ADDON_ID}-${ADDON_VERSION}.zip"
fi
if [[ ! -f "${ASSET_PATH}" ]]; then
echo "Asset nicht gefunden, baue ZIP: ${ASSET_PATH}"
"${ROOT_DIR}/scripts/build_kodi_zip.sh" >/dev/null
fi
if [[ ! -f "${ASSET_PATH}" ]]; then
echo "Asset fehlt nach Build: ${ASSET_PATH}" >&2
exit 1
fi
if [[ -z "${TITLE}" ]]; then
TITLE="ViewIT ${TAG}"
fi
REMOTE_URL="$(git -C "${ROOT_DIR}" remote get-url origin)"
read -r BASE_URL OWNER REPO < <(python3 - "${REMOTE_URL}" <<'PY'
import re
import sys
u = sys.argv[1].strip()
m = re.match(r"^https?://([^/]+)/([^/]+)/([^/.]+)(?:\.git)?/?$", u)
if not m:
raise SystemExit("Origin-URL muss https://host/owner/repo(.git) sein.")
host, owner, repo = m.group(1), m.group(2), m.group(3)
print(f"https://{host}", owner, repo)
PY
)
API_BASE="${BASE_URL}/api/v1/repos/${OWNER}/${REPO}"
ASSET_NAME="$(basename "${ASSET_PATH}")"
if [[ "${DRY_RUN}" == "1" ]]; then
echo "[DRY-RUN] API: ${API_BASE}"
echo "[DRY-RUN] Tag: ${TAG}"
echo "[DRY-RUN] Asset: ${ASSET_PATH}"
exit 0
fi
if [[ -z "${GITEA_TOKEN:-}" ]]; then
echo "Bitte GITEA_TOKEN setzen." >&2
exit 1
fi
tmp_json="$(mktemp)"
tmp_http="$(mktemp)"
trap 'rm -f "${tmp_json}" "${tmp_http}"' EXIT
urlenc() {
python3 - "$1" <<'PY'
import sys
from urllib.parse import quote
print(quote(sys.argv[1], safe=""))
PY
}
tag_enc="$(urlenc "${TAG}")"
auth_header="Authorization: token ${GITEA_TOKEN}"
http_code="$(curl -sS -H "${auth_header}" -o "${tmp_json}" -w "%{http_code}" "${API_BASE}/releases/tags/${tag_enc}")"
if [[ "${http_code}" == "200" ]]; then
RELEASE_ID="$(python3 - "${tmp_json}" <<'PY'
import json,sys
print(json.load(open(sys.argv[1], encoding="utf-8"))["id"])
PY
)"
elif [[ "${http_code}" == "404" ]]; then
payload="$(python3 - "${TAG}" "${TITLE}" "${NOTES}" <<'PY'
import json,sys
print(json.dumps({
"tag_name": sys.argv[1],
"name": sys.argv[2],
"body": sys.argv[3],
"draft": False,
"prerelease": False
}))
PY
)"
http_code_create="$(curl -sS -X POST -H "${auth_header}" -H "Content-Type: application/json" -d "${payload}" -o "${tmp_json}" -w "%{http_code}" "${API_BASE}/releases")"
if [[ "${http_code_create}" != "201" ]]; then
echo "Release konnte nicht erstellt werden (HTTP ${http_code_create})." >&2
cat "${tmp_json}" >&2
exit 1
fi
RELEASE_ID="$(python3 - "${tmp_json}" <<'PY'
import json,sys
print(json.load(open(sys.argv[1], encoding="utf-8"))["id"])
PY
)"
else
echo "Release-Abfrage fehlgeschlagen (HTTP ${http_code})." >&2
cat "${tmp_json}" >&2
exit 1
fi
assets_code="$(curl -sS -H "${auth_header}" -o "${tmp_json}" -w "%{http_code}" "${API_BASE}/releases/${RELEASE_ID}/assets")"
if [[ "${assets_code}" == "200" ]]; then
EXISTING_ASSET_ID="$(python3 - "${tmp_json}" "${ASSET_NAME}" <<'PY'
import json,sys
assets=json.load(open(sys.argv[1], encoding="utf-8"))
name=sys.argv[2]
for a in assets:
if a.get("name")==name:
print(a.get("id"))
break
PY
)"
if [[ -n "${EXISTING_ASSET_ID}" ]]; then
del_code="$(curl -sS -X DELETE -H "${auth_header}" -o "${tmp_http}" -w "%{http_code}" "${API_BASE}/releases/${RELEASE_ID}/assets/${EXISTING_ASSET_ID}")"
if [[ "${del_code}" != "204" ]]; then
echo "Altes Asset konnte nicht geloescht werden (HTTP ${del_code})." >&2
cat "${tmp_http}" >&2
exit 1
fi
fi
fi
asset_name_enc="$(urlenc "${ASSET_NAME}")"
upload_code="$(curl -sS -X POST -H "${auth_header}" -F "attachment=@${ASSET_PATH}" -o "${tmp_json}" -w "%{http_code}" "${API_BASE}/releases/${RELEASE_ID}/assets?name=${asset_name_enc}")"
if [[ "${upload_code}" != "201" ]]; then
echo "Asset-Upload fehlgeschlagen (HTTP ${upload_code})." >&2
cat "${tmp_json}" >&2
exit 1
fi
echo "Release-Asset hochgeladen:"
echo " Repo: ${OWNER}/${REPO}"
echo " Tag: ${TAG}"
echo " Asset: ${ASSET_NAME}"
echo " URL: ${BASE_URL}/${OWNER}/${REPO}/releases/tag/${TAG}"


@@ -0,0 +1,17 @@
#!/usr/bin/env bash
set -euo pipefail
ROOT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")/.." && pwd)"
DIST_DIR="${ROOT_DIR}/dist"
HOST="${HOST:-127.0.0.1}"
PORT="${PORT:-8080}"
if [[ ! -f "${DIST_DIR}/repo/addons.xml" ]]; then
echo "Missing ${DIST_DIR}/repo/addons.xml" >&2
echo "Run ./scripts/build_local_kodi_repo.sh first." >&2
exit 1
fi
echo "Serving local Kodi repo from ${DIST_DIR}"
echo "Repository URL: http://${HOST}:${PORT}/repo/addons.xml"
(cd "${DIST_DIR}" && python3 -m http.server "${PORT}" --bind "${HOST}")

scripts/zip_deterministic.py Executable file

@@ -0,0 +1,73 @@
#!/usr/bin/env python3
"""Create deterministic zip archives.
Usage:
zip_deterministic.py <zip_path> <root_dir>
The archive will include the root directory itself and all files under it.
"""
from __future__ import annotations
import os
import sys
import time
import zipfile
from pathlib import Path
def _timestamp() -> tuple[int, int, int, int, int, int]:
epoch = os.environ.get("SOURCE_DATE_EPOCH")
if epoch:
try:
value = int(epoch)
return time.gmtime(value)[:6]
except Exception:
pass
return (2000, 1, 1, 0, 0, 0)
def _iter_files(root: Path):
for dirpath, dirnames, filenames in os.walk(root):
dirnames[:] = sorted([d for d in dirnames if d != "__pycache__"])
for filename in sorted(filenames):
if filename.endswith(".pyc"):
continue
yield Path(dirpath) / filename
def _add_file(zf: zipfile.ZipFile, file_path: Path, arcname: str) -> None:
info = zipfile.ZipInfo(arcname, date_time=_timestamp())
info.compress_type = zipfile.ZIP_DEFLATED
info.external_attr = (0o644 & 0xFFFF) << 16
with file_path.open("rb") as handle:
data = handle.read()
zf.writestr(info, data, compress_type=zipfile.ZIP_DEFLATED)
def main() -> int:
if len(sys.argv) != 3:
print("Usage: zip_deterministic.py <zip_path> <root_dir>")
return 2
zip_path = Path(sys.argv[1]).resolve()
root = Path(sys.argv[2]).resolve()
if not root.exists() or not root.is_dir():
print(f"Missing root dir: {root}")
return 2
base = root.parent
zip_path.parent.mkdir(parents=True, exist_ok=True)
if zip_path.exists():
zip_path.unlink()
with zipfile.ZipFile(zip_path, "w") as zf:
for file_path in sorted(_iter_files(root)):
arcname = str(file_path.relative_to(base)).replace(os.sep, "/")
_add_file(zf, file_path, arcname)
print(str(zip_path))
return 0
if __name__ == "__main__":
raise SystemExit(main())
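A quick way to sanity-check the reproducibility claim (assumed workflow; the addon directory path comes from the build scripts above):

```python
import hashlib
import os
import subprocess

os.environ["SOURCE_DATE_EPOCH"] = "1700000000"  # pin archive timestamps
digests = []
for _ in range(2):
    subprocess.run(
        ["python3", "scripts/zip_deterministic.py",
         "dist/check.zip", "dist/plugin.video.viewit"],
        check=True,
    )
    with open("dist/check.zip", "rb") as handle:
        digests.append(hashlib.sha256(handle.read()).hexdigest())
assert digests[0] == digests[1]  # byte-identical archives across runs
```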