Compare commits
21 commits: v0.1.76.5-dev ... v0.1.87.0-dev

| SHA1 |
|---|
| e8fa00ff7b |
| ab05040335 |
| b0706befdb |
| 30bafd269c |
| 82c90110e4 |
| e5d93e3af6 |
| 938f6c0e3d |
| 4e9ae348b9 |
| 979ad4f8f5 |
| 24df7254b7 |
| 1754013d82 |
| ea9ceec34c |
| d51505e004 |
| 4b9ba6a01a |
| 811f617ff7 |
| e4828dedd0 |
| 1969c21c11 |
| f8e59acd94 |
| 0a161fd8c6 |
| caa4a4a0e2 |
| 5ccda44623 |
.gitignore (vendored, 4 lines changed)
@@ -19,3 +19,7 @@ __pycache__/

# Plugin runtime caches
/addon/plugins/*_cache.json

# Project documentation (local)
/PROJECT_INDEX.md
/FUNCTION_MAP.md
@@ -1,3 +1,87 @@

## 0.1.86.5-dev - 2026-04-03

- dev: bump to 0.1.86.0-dev – global search configurable, changelog dialog on first start

## 0.1.86.0-dev - 2026-04-02

- dev: bump to 0.1.85.5-dev – settings menu made more user-friendly

## 0.1.85.5-dev - 2026-04-02

- dev: bump to 0.1.85.0-dev – settings.xml and default.py reset to the 0.1.84.5 state, serienstream_plugin.py kept current

## 0.1.85.0-dev - 2026-04-01

- dev: bump to 0.1.84.5-dev – settings.xml migrated to the Kodi 19+ format (version=1), level filter for Expert/Advanced fixed

## 0.1.84.5-dev - 2026-03-31

- dev: bump to 0.1.84.0-dev – SerienStream collections with poster/plot, session cache for collection URLs

## 0.1.84.0-dev - 2026-03-16

- dev: bump to 0.1.83.5-dev – Trakt continue watching via watched/shows, skip specials

## 0.1.83.5-dev - 2026-03-15

- dev: SerienStream search via /suche?term=, season 0 as movies, catalog search removed

## 0.1.83.0-dev - 2026-03-15

- dev: Trakt performance, search filter phrase match, debug settings at Expert level

## 0.1.82.5-dev - 2026-03-15

- dev: update version comparison fixed to compare numerically

## 0.1.82.0-dev - 2026-03-14

- dev: show the HDFilme plot in the "Neuste" section

## 0.1.81.5-dev - 2026-03-14

- dev: YouTube HD via inputstream.adaptive, DokuStreams search fix

## 0.1.81.0-dev - 2026-03-14

- dev: YouTube fixes, fixed Trakt credentials, upcoming view, watchlist context menu

## 0.1.80.5-dev - 2026-03-13

- dev: YouTube: yt-dlp ZIP installation from GitHub, no yesno dialog

## 0.1.80.0-dev - 2026-03-13

- dev: YouTube plugin: yt-dlp search, bug fix for the Any import

## 0.1.79.5-dev - 2026-03-11

- dev: changelog hook switched to prepare-commit-msg

## 0.1.79.0-dev - 2026-03-11

- dev: determine the TMDB API key automatically from the Kodi scraper

## 0.1.78.5-dev - 2026-03-11

- dev: remove the time of day from episode titles, tvshow mediatype fix

## 0.1.78.0-dev - 2026-03-11

- dev: Trakt scrobbling for all playback paths

## 0.1.77.5-dev - 2026-03-10

- dev: "max entries per page" setting per plugin

## 0.1.77.0-dev - 2026-03-10

- dev: show the changelog dialog only when an entry exists

## 0.1.76.5-dev - 2026-03-10

- dev: version filter fixed for 4-part version numbers

## 0.1.76.0-dev - 2026-03-10

- dev: bump to 0.1.76.0-dev – older versions in the update dialog, release branch mapping, README revised
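The 0.1.76.5 and 0.1.82.5 entries both concern comparing 4-part version numbers numerically rather than as strings. A minimal sketch of why plain string comparison fails and how numeric tuples fix it (`version_tuple` is an illustrative helper, not the add-on's actual code):

```python
def version_tuple(version: str) -> tuple[int, ...]:
    """Parse a dotted version like '0.1.76.5-dev' into a tuple of ints.

    A trailing suffix such as '-dev' is stripped before splitting.
    """
    core = version.split("-", 1)[0]
    return tuple(int(part) for part in core.split("."))

# Lexicographic comparison gets 4-part versions wrong; numeric tuples do not:
assert "0.1.76.5" > "0.1.10.0" and "0.1.9.0" > "0.1.76.5"        # string order, wrong
assert version_tuple("0.1.76.5-dev") > version_tuple("0.1.9.0")  # numeric order, right
assert version_tuple("0.1.10.0") > version_tuple("0.1.9.9")
```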
addon/CHANGELOG-USER.md (new file, 39 lines)
@@ -0,0 +1,39 @@

## 0.1.86.5

**Trakt**
- All Trakt features are now grouped under a dedicated "Trakt" submenu entry
- Continue watching detects season transitions correctly (e.g. S02E12 → S03E01 instead of S02E13)
- Scrobbling is more reliable: episodes are no longer incorrectly marked as watched when the stream length is unknown
- Trakt IDs are now found even when TMDB does not know the show

**Global search**
- Each search plugin can be enabled individually in the settings under "Global search"
- YouTube is selectable as an optional search source
- Search results show the provider in square brackets, e.g. "Breaking Bad [Serienstream]"

**New on startup**
- After an update, a changelog dialog with the news is shown automatically

## 0.1.86.0

**Global search improved**
- Each search plugin can be enabled or disabled individually in the settings under "Global search"
- YouTube is now also selectable as an optional search source
- Search results now show the provider in square brackets, e.g. "Breaking Bad [Serienstream]"

**Settings menu tidied up**
- New "Global search" category with checkboxes for all providers
- Trakt connection status visible directly in the settings
- Preferred hoster as a selection list, only active when autoplay is enabled

## 0.1.84.0

**Trakt continue watching**
- Shows you have already watched are listed via Trakt as a "Continue watching" list
- Specials (season 0) are skipped automatically

## 0.1.82.0

**SerienStream collections**
- Collections on SerienStream are now browsable
- The list is sorted alphabetically and special characters in titles are cleaned up
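The season-transition fix described above (S02E12 → S03E01 instead of S02E13) needs, per season, the highest watched episode plus the season's total episode count; the trakt.py diff further down builds exactly such a season-to-highest-episode mapping. A hedged sketch of the selection logic, with both mappings as assumed inputs rather than the add-on's real API:

```python
def next_episode(seasons_watched: dict[int, int],
                 episodes_per_season: dict[int, int]) -> tuple[int, int]:
    """Return (season, episode) to watch next.

    seasons_watched maps season -> highest watched episode number;
    episodes_per_season maps season -> total episodes in that season.
    Both arguments are assumptions for this sketch, not the add-on's API.
    """
    last_season = max(seasons_watched)
    last_episode = seasons_watched[last_season]
    if last_episode >= episodes_per_season.get(last_season, last_episode):
        return last_season + 1, 1  # season finished: jump to the next season
    return last_season, last_episode + 1

# S02E12 watched and season 2 has 12 episodes -> S03E01, not S02E13:
assert next_episode({1: 10, 2: 12}, {1: 10, 2: 12}) == (3, 1)
# Mid-season: simply advance the episode counter:
assert next_episode({1: 10, 2: 5}, {1: 10, 2: 12}) == (2, 6)
```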
@@ -1,11 +1,12 @@
 <?xml version='1.0' encoding='utf-8'?>
-<addon id="plugin.video.viewit" name="ViewIt" version="0.1.76.5-dev" provider-name="ViewIt">
+<addon id="plugin.video.viewit" name="ViewIt" version="0.1.87.0-dev" provider-name="ViewIt">
     <requires>
         <import addon="xbmc.python" version="3.0.0" />
         <import addon="script.module.requests" />
         <import addon="script.module.beautifulsoup4" />
         <import addon="script.module.resolveurl" version="5.1.0" />
         <import addon="script.trakt" optional="true" />
+        <import addon="script.module.yt-dlp" optional="true" />
     </requires>
     <extension point="xbmc.python.pluginsource" library="default.py">
         <provides>video</provides>
@@ -52,6 +52,24 @@ def _require_init() -> None:
         print("[ViewIT/metadata] WARNUNG: metadata.init() wurde nicht aufgerufen – Metadaten-Funktionen arbeiten mit Standardwerten!", file=sys.stderr)


+def _resolve_tmdb_api_key(user_key: str) -> str:
+    """Key from the ViewIT settings, an installed Kodi scraper, or the community fallback."""
+    if user_key:
+        return user_key
+    if xbmcaddon:
+        for addon_id in (
+            "metadata.tvshows.themoviedb.org.python",
+            "metadata.themoviedb.org.python",
+        ):
+            try:
+                key = xbmcaddon.Addon(addon_id).getSetting("tmdb_apikey")
+                if key:
+                    return key
+            except RuntimeError:
+                pass
+    return "80246691939720672db3fc71c74e0ef2"
+
+
 def init(
     *,
     get_setting_string: Callable[[str], str],

@@ -153,7 +171,7 @@ def tmdb_labels_and_art(title: str) -> tuple[dict[str, str], dict[str, str], lis
     art: dict[str, str] = {}
     cast: list[TmdbCastMember] = []
     query = (title or "").strip()
-    api_key = _get_setting_string("tmdb_api_key").strip()
+    api_key = _resolve_tmdb_api_key(_get_setting_string("tmdb_api_key").strip())
     log_requests = _get_setting_bool("tmdb_log_requests", default=False)
     log_responses = _get_setting_bool("tmdb_log_responses", default=False)
     if api_key:

@@ -295,7 +313,7 @@ def tmdb_episode_labels_and_art(*, title: str, season_label: str, episode_label:
     season_key = (tmdb_id, season_number, language, flags)
     cached_season = tmdb_cache_get(_TMDB_SEASON_CACHE, season_key)
     if cached_season is None:
-        api_key = _get_setting_string("tmdb_api_key").strip()
+        api_key = _resolve_tmdb_api_key(_get_setting_string("tmdb_api_key").strip())
         if not api_key:
             return {"title": episode_label}, {}
         log_requests = _get_setting_bool("tmdb_log_requests", default=False)

@@ -358,7 +376,7 @@ def tmdb_episode_cast(*, title: str, season_label: str, episode_label: str) -> l
     if cached is not None:
         return list(cached)

-    api_key = _get_setting_string("tmdb_api_key").strip()
+    api_key = _resolve_tmdb_api_key(_get_setting_string("tmdb_api_key").strip())
     if not api_key:
         tmdb_cache_set(_TMDB_EPISODE_CAST_CACHE, cache_key, [])
         return []

@@ -398,7 +416,7 @@ def tmdb_season_labels_and_art(
     show_plot = _get_setting_bool("tmdb_show_plot", default=True)
     show_art = _get_setting_bool("tmdb_show_art", default=True)
     flags = f"p{int(show_plot)}a{int(show_art)}"
-    api_key = _get_setting_string("tmdb_api_key").strip()
+    api_key = _resolve_tmdb_api_key(_get_setting_string("tmdb_api_key").strip())
     log_requests = _get_setting_bool("tmdb_log_requests", default=False)
     log_responses = _get_setting_bool("tmdb_log_responses", default=False)
     log_fn = tmdb_file_log if (log_requests or log_responses) else None
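The `_resolve_tmdb_api_key` hunk above replaces a plain settings read with a first-non-empty chain: user setting, then installed Kodi scraper settings (where a missing add-on raises `RuntimeError`), then a community fallback. The same precedence pattern can be shown in isolation (illustrative names, not the add-on's API):

```python
from typing import Callable, Iterable

def first_nonempty(providers: Iterable[Callable[[], str]], fallback: str) -> str:
    """Return the first provider result that is a non-empty string."""
    for provider in providers:
        try:
            value = provider()
        except RuntimeError:  # a provider may be unavailable, as in the diff
            continue
        if value:
            return value
    return fallback

def missing() -> str:
    raise RuntimeError("addon not installed")

# Empty user setting and a missing scraper fall through to the next provider:
assert first_nonempty([lambda: "", missing, lambda: "scraper-key"], "community") == "scraper-key"
assert first_nonempty([lambda: ""], "community") == "community"
```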
@@ -65,6 +65,8 @@ class TraktItem:
     episode_thumb: str = ""  # screenshot URL (extended=images)
     show_poster: str = ""  # show poster URL (extended=images)
     show_fanart: str = ""  # show fanart URL (extended=images)
+    # season → highest watched episode number (for season-transition detection)
+    seasons_watched: dict = field(default_factory=dict)


 @dataclass(frozen=True)
@@ -370,6 +372,47 @@ class TraktClient:
             return []
         return self._parse_history_items(payload)

+    def get_watched_shows(self, token: str) -> list[TraktItem]:
+        """GET /users/me/watched/shows – all shows with their last watched episode."""
+        status, payload = self._get("/users/me/watched/shows", token=token)
+        if status != 200 or not isinstance(payload, list):
+            self._do_log(f"get_watched_shows: status={status}")
+            return []
+        result: list[TraktItem] = []
+        for entry in payload:
+            if not isinstance(entry, dict):
+                continue
+            show = entry.get("show") or {}
+            ids = self._parse_ids((show.get("ids") or {}))
+            title = str(show.get("title", "") or "")
+            year = int(show.get("year", 0) or 0)
+            seasons = entry.get("seasons") or []
+            last_season = 0
+            last_episode = 0
+            seasons_watched: dict[int, int] = {}
+            for s in seasons:
+                snum = int((s.get("number") or 0))
+                if snum == 0:  # skip specials
+                    continue
+                max_ep = 0
+                for ep in (s.get("episodes") or []):
+                    enum = int((ep.get("number") or 0))
+                    if enum > max_ep:
+                        max_ep = enum
+                    if snum > last_season or (snum == last_season and enum > last_episode):
+                        last_season = snum
+                        last_episode = enum
+                if max_ep > 0:
+                    seasons_watched[snum] = max_ep
+            if title:
+                result.append(TraktItem(
+                    title=title, year=year, media_type="episode",
+                    ids=ids, season=last_season, episode=last_episode,
+                    seasons_watched=seasons_watched,
+                ))
+        self._do_log(f"get_watched_shows: {len(result)} Serien")
+        return result
+
     # -------------------------------------------------------------------
     # Calendar
     # -------------------------------------------------------------------
@@ -427,6 +470,21 @@ class TraktClient:
         ids = show.get("ids") or {}
         return str(ids.get("slug") or ids.get("trakt") or "")

+    def search_show_ids(self, query: str) -> "tuple[int, str]":
+        """GET /search/show?query=... – returns (tmdb_id, imdb_id) of the first hit.
+
+        Fallback when TMDB does not deliver any IDs.
+        """
+        from urllib.parse import urlencode
+        path = f"/search/show?{urlencode({'query': query, 'limit': 1})}"
+        status, payload = self._get(path)
+        if status != 200 or not isinstance(payload, list) or not payload:
+            return 0, ""
+        show = (payload[0] or {}).get("show") or {}
+        ids = show.get("ids") or {}
+        tmdb_id = int(ids.get("tmdb") or 0)
+        imdb_id = str(ids.get("imdb") or "")
+        return tmdb_id, imdb_id
+
     def lookup_tv_season(
         self,
         show_id_or_slug: "str | int",
addon/default.py (874 lines; file diff suppressed because it is too large)
@@ -286,7 +286,7 @@ class DokuStreamsPlugin(BasisPlugin):
             soup = _get_soup(search_url, session=session)
         except Exception:
             return []
-        return _parse_listing_hits(soup, query=query)
+        return _parse_listing_hits(soup)

     def capabilities(self) -> set[str]:
         return {"genres", "popular_series", "tags", "random"}
@@ -455,15 +455,24 @@ class DokuStreamsPlugin(BasisPlugin):
         art = {"thumb": poster, "poster": poster}
         return info, art, None

+    def series_url_for_title(self, title: str) -> Optional[str]:
+        return self._title_to_url.get((title or "").strip())
+
+    def remember_series_url(self, title: str, url: str) -> None:
+        title = (title or "").strip()
+        url = (url or "").strip()
+        if title and url:
+            self._title_to_url[title] = url
+
     def seasons_for(self, title: str) -> List[str]:
         title = (title or "").strip()
-        if not title or title not in self._title_to_url:
+        if not title:
             return []
         return ["Stream"]

     def episodes_for(self, title: str, season: str) -> List[str]:
         title = (title or "").strip()
-        if not title or title not in self._title_to_url:
+        if not title:
             return []
         return [title]
@@ -537,6 +546,14 @@ class DokuStreamsPlugin(BasisPlugin):
         """Follows redirects and tries ResolveURL for hoster links."""
         if not link:
             return None
+        # resolve YouTube URLs via yt-dlp
+        from ytdlp_helper import extract_youtube_id, resolve_youtube_url
+        yt_id = extract_youtube_id(link)
+        if yt_id:
+            resolved = resolve_youtube_url(yt_id)
+            if resolved:
+                return resolved
+            return None
         from plugin_helpers import resolve_via_resolveurl
         resolved = resolve_via_resolveurl(link, fallback_to_link=False)
         if resolved:
@@ -388,7 +388,7 @@ class HdfilmePlugin(BasisPlugin):
         info: dict[str, str] = {"title": title}
         art: dict[str, str] = {}

-        # Cache-Hit
+        # cache hit – only return it when a plot is present (otherwise load the detail page)
         cached = self._title_meta.get(title)
         if cached:
             plot, poster = cached

@@ -396,7 +396,7 @@ class HdfilmePlugin(BasisPlugin):
                 info["plot"] = plot
             if poster:
                 art["thumb"] = art["poster"] = poster
-            if info or art:
+            if plot:
                 return info, art, None

         # load the detail page
@@ -57,7 +57,6 @@ else:  # pragma: no cover


 SETTING_BASE_URL = "serienstream_base_url"
-SETTING_CATALOG_SEARCH = "serienstream_catalog_search"
 DEFAULT_BASE_URL = "https://s.to"
 DEFAULT_PREFERRED_HOSTERS = ["voe"]
 DEFAULT_TIMEOUT = 20

@@ -80,10 +79,7 @@ HEADERS = {
 SESSION_CACHE_TTL_SECONDS = 300
 SESSION_CACHE_PREFIX = "viewit.serienstream"
 SESSION_CACHE_MAX_TITLE_URLS = 800
-CATALOG_SEARCH_TTL_SECONDS = 600
-CATALOG_SEARCH_CACHE_KEY = "catalog_index"
 GENRE_LIST_PAGE_SIZE = 20
-_CATALOG_INDEX_MEMORY: tuple[float, list["SeriesResult"]] = (0.0, [])
 ProgressCallback = Optional[Callable[[str, int | None], Any]]
@@ -509,6 +505,14 @@ def _strip_tags(value: str) -> str:
     return re.sub(r"<[^>]+>", " ", value or "")


+def _clean_collection_title(title: str) -> str:
+    cleaned = "".join(
+        ch for ch in title
+        if unicodedata.category(ch) not in ("So", "Sm", "Sk", "Sc", "Cs", "Co", "Cn")
+    )
+    return re.sub(r"\s+", " ", cleaned).strip()


 def _search_series_api(query: str) -> list[SeriesResult]:
     query = (query or "").strip()
     if not query:
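The `_clean_collection_title` helper added above drops code points in the Unicode symbol and unassigned categories (So, Sm, Sk, Sc, Cs, Co, Cn) and collapses whitespace, matching the user changelog's note about cleaning special characters in collection titles. Re-creating the same body standalone shows the effect:

```python
import re
import unicodedata

def clean_collection_title(title: str) -> str:
    # Drop symbol/unassigned code points, then collapse runs of whitespace.
    cleaned = "".join(
        ch for ch in title
        if unicodedata.category(ch) not in ("So", "Sm", "Sk", "Sc", "Cs", "Co", "Cn")
    )
    return re.sub(r"\s+", " ", cleaned).strip()

assert clean_collection_title("★ Marvel ★  Filme") == "Marvel Filme"  # ★ is category So
assert clean_collection_title("Top 10 $eries") == "Top 10 eries"      # $ is category Sc
```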
@@ -575,8 +579,8 @@ def _search_series_server(query: str) -> list[SeriesResult]:
     if not query:
         return []
     base = _get_base_url()
-    search_url = f"{base}/search?q={quote(query)}"
-    alt_url = f"{base}/suche?q={quote(query)}"
+    search_url = f"{base}/suche?term={quote(query)}"
+    alt_url = f"{base}/search?term={quote(query)}"
     for url in (search_url, alt_url):
         try:
             body = _get_html_simple(url)
@@ -606,158 +610,30 @@ def _search_series_server(query: str) -> list[SeriesResult]:
            continue
        seen_urls.add(url_abs)
        results.append(SeriesResult(title=title, description="", url=url_abs))
    filtered = [r for r in results if _matches_query(query, title=r.title)]
    if filtered:
        return filtered
    if results:
        return results
    api_results = _search_series_api(query)
    if api_results:
        return api_results
    return []

def _extract_catalog_index_from_html(body: str, *, progress_callback: ProgressCallback = None) -> list[SeriesResult]:
    items: list[SeriesResult] = []
    if not body:
        return items
    seen_urls: set[str] = set()
    item_re = re.compile(
        r"<li[^>]*class=[\"'][^\"']*series-item[^\"']*[\"'][^>]*>(.*?)</li>",
        re.IGNORECASE | re.DOTALL,
    )
    anchor_re = re.compile(r"<a[^>]+href=[\"']([^\"']+)[\"'][^>]*>(.*?)</a>", re.IGNORECASE | re.DOTALL)
    data_search_re = re.compile(r"data-search=[\"']([^\"']*)[\"']", re.IGNORECASE)
    for idx, match in enumerate(item_re.finditer(body), start=1):
        if idx == 1 or idx % 200 == 0:
            _emit_progress(progress_callback, f"Katalog parsen {idx}", 62)
        block = match.group(0)
        inner = match.group(1) or ""
        anchor_match = anchor_re.search(inner)
        if not anchor_match:
            continue
        href = (anchor_match.group(1) or "").strip()
        url = _absolute_url(href)
        if not url or "/serie/" not in url or "/staffel-" in url or "/episode-" in url:
            continue
        if url in seen_urls:
            continue
        seen_urls.add(url)
        title_raw = anchor_match.group(2) or ""
        title = unescape(re.sub(r"\s+", " ", _strip_tags(title_raw))).strip()
        if not title:
            continue
        search_match = data_search_re.search(block)
        description = (search_match.group(1) or "").strip() if search_match else ""
        items.append(SeriesResult(title=title, description=description, url=url))
    return items

def _catalog_index_from_soup(soup: BeautifulSoupT) -> list[SeriesResult]:
    items: list[SeriesResult] = []
    if not soup:
        return items
    seen_urls: set[str] = set()
    for item in soup.select("li.series-item"):
        anchor = item.find("a", href=True)
        if not anchor:
            continue
        href = (anchor.get("href") or "").strip()
        url = _absolute_url(href)
        if not url or "/serie/" not in url or "/staffel-" in url or "/episode-" in url:
            continue
        if url in seen_urls:
            continue
        seen_urls.add(url)
        title = (anchor.get_text(" ", strip=True) or "").strip()
        if not title:
            continue
        description = (item.get("data-search") or "").strip()
        items.append(SeriesResult(title=title, description=description, url=url))
    return items

def _load_catalog_index_from_cache() -> Optional[list[SeriesResult]]:
    global _CATALOG_INDEX_MEMORY
    expires_at, cached = _CATALOG_INDEX_MEMORY
    if cached and expires_at > time.time():
        return list(cached)
    raw = _session_cache_get(CATALOG_SEARCH_CACHE_KEY)
    if not isinstance(raw, list):
        return None
    items: list[SeriesResult] = []
    for entry in raw:
        if not isinstance(entry, list) or len(entry) < 2:
            continue
        title = str(entry[0] or "").strip()
        url = str(entry[1] or "").strip()
        description = str(entry[2] or "") if len(entry) > 2 else ""
        cover = str(entry[3] or "").strip() if len(entry) > 3 else ""
        if title and url:
            items.append(SeriesResult(title=title, description=description, url=url, cover=cover))
    if items:
        _CATALOG_INDEX_MEMORY = (time.time() + CATALOG_SEARCH_TTL_SECONDS, list(items))
    return items or None

def _store_catalog_index_in_cache(items: list[SeriesResult]) -> None:
    global _CATALOG_INDEX_MEMORY
    if not items:
        return
    _CATALOG_INDEX_MEMORY = (time.time() + CATALOG_SEARCH_TTL_SECONDS, list(items))
    payload: list[list[str]] = []
    for entry in items:
        if not entry.title or not entry.url:
            continue
        payload.append([entry.title, entry.url, entry.description, entry.cover])
    _session_cache_set(CATALOG_SEARCH_CACHE_KEY, payload, ttl_seconds=CATALOG_SEARCH_TTL_SECONDS)

def search_series(query: str, *, progress_callback: ProgressCallback = None) -> list[SeriesResult]:
    """Searches shows. Catalog search (complete) or API search (max 10) depending on the setting."""
    """Searches shows. Server search (/suche?term=) first, API as fallback."""
    _ensure_requests()
    if not _normalize_search_text(query):
        return []

    use_catalog = _get_setting_bool(SETTING_CATALOG_SEARCH, default=True)

    if use_catalog:
        _emit_progress(progress_callback, "Pruefe Such-Cache", 15)
        cached = _load_catalog_index_from_cache()
        if cached is not None:
            matched_from_cache = [entry for entry in cached if entry.title and _matches_query(query, title=entry.title)]
            _emit_progress(progress_callback, f"Cache-Treffer: {len(cached)}", 35)
            if matched_from_cache:
                return matched_from_cache

        _emit_progress(progress_callback, "Lade Katalogseite", 42)
        catalog_url = f"{_get_base_url()}/serien?by=genre"
        items: list[SeriesResult] = []
        try:
            soup = _get_soup_simple(catalog_url)
            items = _catalog_index_from_soup(soup)
        except Exception:
            body = _get_html_simple(catalog_url)
            items = _extract_catalog_index_from_html(body, progress_callback=progress_callback)
            if not items:
                _emit_progress(progress_callback, "Fallback-Parser", 58)
                soup = BeautifulSoup(body, "html.parser")
                items = _catalog_index_from_soup(soup)
        if items:
            _store_catalog_index_in_cache(items)
            _emit_progress(progress_callback, f"Filtere Treffer ({len(items)})", 70)
            return [entry for entry in items if entry.title and _matches_query(query, title=entry.title)]

    # API search (primary when the catalog is disabled, fallback when the catalog is empty)
    _emit_progress(progress_callback, "API-Suche", 60)
    api_results = _search_series_api(query)
    if api_results:
        _emit_progress(progress_callback, f"API-Treffer: {len(api_results)}", 80)
        return api_results

    _emit_progress(progress_callback, "Server-Suche", 85)
    # 1. server search (fast, complete, direct HTML search)
    _emit_progress(progress_callback, "Suche", 20)
    server_results = _search_series_server(query)
    if server_results:
        _emit_progress(progress_callback, f"Server-Treffer: {len(server_results)}", 95)
        return [entry for entry in server_results if entry.title and _matches_query(query, title=entry.title)]
    return []
    return server_results

    # 2. API search (fallback, max 10 results)
    _emit_progress(progress_callback, "API-Suche", 60)
    return _search_series_api(query)


def parse_series_catalog(soup: BeautifulSoupT) -> dict[str, list[SeriesResult]]:
@@ -1159,6 +1035,8 @@ class SerienstreamPlugin(BasisPlugin):
         self._latest_hoster_cache: dict[str, list[str]] = {}
         self._series_metadata_cache: dict[str, tuple[dict[str, str], dict[str, str]]] = {}
         self._series_metadata_full: set[str] = set()
+        self._collection_url_cache: dict[str, str] = {}
+        self._collection_has_more: bool = False
         self.is_available = True
         self.unavailable_reason: str | None = None
         if not self._requests_available:  # pragma: no cover - optional dependency

@@ -1252,7 +1130,7 @@ class SerienstreamPlugin(BasisPlugin):
             except Exception:
                 continue
             url = str(item.get("url") or "").strip()
-            if number <= 0 or not url:
+            if number < 0 or not url:
                 continue
             seasons.append(SeasonInfo(number=number, url=url, episodes=[]))
         if not seasons:
@@ -1383,7 +1261,63 @@ class SerienstreamPlugin(BasisPlugin):

     def capabilities(self) -> set[str]:
         """Reports supported features for router menus."""
-        return {"popular_series", "genres", "latest_episodes", "alpha"}
+        return {"popular_series", "genres", "latest_episodes", "alpha", "collections"}

+    def collections(self) -> list[str]:
+        """Returns collection names from /sammlungen (page 1, for pagination)."""
+        return self._collections_page(1)
+
+    def _collections_page(self, page: int = 1) -> list[str]:
+        """Returns one page of collection names from /sammlungen (paginated)."""
+        if not self._requests_available:
+            return []
+        base = _get_base_url()
+        names: list[str] = []
+        url_map: dict[str, str] = {}
+        url = f"{base}/sammlungen" if page == 1 else f"{base}/sammlungen?page={page}"
+        soup = _get_soup_simple(url)
+        for a in soup.select('a[href*="/sammlung/"]'):
+            h2 = a.find("h2")
+            if not h2:
+                continue
+            title = _clean_collection_title(h2.get_text(strip=True))
+            href = (a.get("href") or "").strip()
+            if title and href:
+                url_map[title] = _absolute_url(href)
+                names.append(title)
+        if url_map:
+            existing = _session_cache_get("collection_urls")
+            if isinstance(existing, dict):
+                existing.update(url_map)
+                _session_cache_set("collection_urls", existing)
+            else:
+                _session_cache_set("collection_urls", url_map)
+        names.sort(key=lambda t: t.casefold())
+        return names
+
+    def titles_for_collection(self, collection: str, page: int = 1) -> list[str]:
+        """Returns the series titles of a collection (paginated)."""
+        if not self._requests_available:
+            return []
+        url_map = _session_cache_get("collection_urls")
+        if isinstance(url_map, dict):
+            self._collection_url_cache.update(url_map)
+        url = self._collection_url_cache.get(collection)
+        if not url:
+            return []
+        if page > 1:
+            url = f"{url}?page={page}"
+        base_url = self._collection_url_cache[collection]
+        soup = _get_soup_simple(url)
+        titles: list[str] = []
+        for a in soup.select('h6 a[href*="/serie/"]'):
+            title = a.get_text(strip=True)
+            href = (a.get("href") or "").strip()
+            if title and href:
+                self._remember_series_result(title, _absolute_url(href), "")
+                titles.append(title)
+        self._collection_has_more = bool(soup.select(f'a[href*="?page={page + 1}"]'))
+        return titles
+
     def popular_series(self) -> list[str]:
         """Returns the titles of the popular shows (source: `/beliebte-serien`)."""
@@ -1794,6 +1728,8 @@ class SerienstreamPlugin(BasisPlugin):

     @staticmethod
     def _season_label(number: int) -> str:
+        if number == 0:
+            return "Filme"
         return f"Staffel {number}"

     @staticmethod

@@ -1808,6 +1744,8 @@ class SerienstreamPlugin(BasisPlugin):

     @staticmethod
     def _parse_season_number(label: str) -> int | None:
+        if (label or "").strip().casefold() == "filme":
+            return 0
         digits = "".join(ch for ch in label if ch.isdigit())
         if not digits:
             return None
addon/plugins/youtube_plugin.py (new file, 238 lines)
@@ -0,0 +1,238 @@
"""YouTube plugin for ViewIT.

Search and playback of YouTube videos via HTML scraping and yt-dlp.
Requires script.module.yt-dlp (optional).

Video entries are encoded as "Title||VIDEO_ID".
"""

from __future__ import annotations

import json
import re
from typing import Any, Callable, Dict, List, Optional, Set

try:
    import requests
except ImportError:
    requests = None  # type: ignore

from plugin_interface import BasisPlugin

try:
    import xbmc  # type: ignore

    def _log(msg: str) -> None:
        xbmc.log(f"[ViewIt][YouTube] {msg}", xbmc.LOGWARNING)
except ImportError:
    def _log(msg: str) -> None:
        pass

# ---------------------------------------------------------------------------
# Constants
# ---------------------------------------------------------------------------

DEFAULT_TIMEOUT = 20
_SEP = "||"  # separator between title and video ID

BASE_URL = "https://www.youtube.com"

HEADERS = {
    "User-Agent": (
        "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 "
        "(KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36"
    ),
    "Accept-Language": "de-DE,de;q=0.9,en;q=0.8",
    "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
}
ProgressCallback = Optional[Callable[[str, Optional[int]], Any]]

# ---------------------------------------------------------------------------
# Helper functions
# ---------------------------------------------------------------------------

def _encode(title: str, video_id: str) -> str:
    return f"{title}{_SEP}{video_id}"


def _decode_id(entry: str) -> Optional[str]:
    """Extracts the video ID from an encoded entry."""
    if _SEP in entry:
        return entry.split(_SEP, 1)[1].strip()
    # fallback: 11-character YouTube ID at the end
    m = re.search(r"([A-Za-z0-9_-]{11})$", entry)
    return m.group(1) if m else None


def _decode_title(entry: str) -> str:
    if _SEP in entry:
        return entry.split(_SEP, 1)[0].strip()
    return entry

def _get_session() -> Any:
    try:
        from http_session_pool import get_requests_session
        return get_requests_session("youtube", headers=HEADERS)
    except Exception:
        if requests:
            s = requests.Session()
            s.headers.update(HEADERS)
            return s
        return None


def _extract_yt_initial_data(html: str) -> Optional[dict]:
    """Extracts the ytInitialData JSON from the HTML source."""
    m = re.search(r"var ytInitialData\s*=\s*(\{.*?\});\s*(?:var |</script>)", html, re.DOTALL)
    if not m:
        # alternative pattern
        m = re.search(r"ytInitialData\s*=\s*(\{.+?\})\s*;", html, re.DOTALL)
    if not m:
        return None
    try:
        return json.loads(m.group(1))
    except Exception:
        return None

def _videos_from_search_data(data: dict) -> List[str]:
    """Extracts video entries from ytInitialData (search results)."""
    results: List[str] = []
    try:
        contents = (
            data
            .get("contents", {})
            .get("twoColumnSearchResultsRenderer", {})
            .get("primaryContents", {})
            .get("sectionListRenderer", {})
            .get("contents", [])
        )
        for section in contents:
            items = (
                section
                .get("itemSectionRenderer", {})
                .get("contents", [])
            )
            for item in items:
                vr = item.get("videoRenderer") or item.get("compactVideoRenderer")
                if not vr:
                    continue
                video_id = vr.get("videoId", "").strip()
                if not video_id:
                    continue
                title_runs = vr.get("title", {}).get("runs", [])
                title = "".join(r.get("text", "") for r in title_runs).strip()
                if not title:
                    title = vr.get("title", {}).get("simpleText", "").strip()
                if title and video_id:
                    results.append(_encode(title, video_id))
    except Exception as exc:
        _log(f"[YouTube] _videos_from_search_data Fehler: {exc}")
    return results


def _search_with_ytdlp(query: str, count: int = 20) -> List[str]:
    """Search YouTube videos via the yt-dlp ytsearch extractor."""
    if not ensure_ytdlp_in_path():
        return []
    try:
        from yt_dlp import YoutubeDL  # type: ignore
    except ImportError:
        return []
    ydl_opts = {"quiet": True, "no_warnings": True, "extract_flat": True}
    try:
        with YoutubeDL(ydl_opts) as ydl:
            info = ydl.extract_info(f"ytsearch{count}:{query}", download=False)
            if not info:
                return []
            return [
                _encode(e["title"], e["id"])
                for e in (info.get("entries") or [])
                if e.get("id") and e.get("title")
            ]
    except Exception as exc:
        _log(f"[YouTube] yt-dlp search error: {exc}")
        return []
def _fetch_search_videos(url: str) -> List[str]:
    """Fetch videos from a YouTube search via ytInitialData."""
    session = _get_session()
    if session is None:
        return []
    try:
        resp = session.get(url, timeout=DEFAULT_TIMEOUT)
        resp.raise_for_status()
        data = _extract_yt_initial_data(resp.text)
        if not data:
            return []
        return _videos_from_search_data(data)
    except Exception as exc:
        _log(f"[YouTube] _fetch_search_videos ({url}): {exc}")
        return []
# Imported after the helpers above; they only call these names at runtime.
from ytdlp_helper import ensure_ytdlp_in_path, resolve_youtube_url


# ---------------------------------------------------------------------------
# Plugin
# ---------------------------------------------------------------------------
class YoutubePlugin(BasisPlugin):
    name = "YouTube"

    _SEASONS = ["Stream"]

    def capabilities(self) -> Set[str]:
        return set()

    async def search_titles(
        self,
        query: str,
        progress_callback: ProgressCallback = None,
    ) -> List[str]:
        if not query.strip():
            return []
        # Primary: yt-dlp (robust, no HTML scraping)
        results = _search_with_ytdlp(query)
        if results:
            return results
        # Fallback: HTML scraping
        if requests is None:
            return []
        url = f"{BASE_URL}/results?search_query={requests.utils.quote(query)}"  # type: ignore
        return _fetch_search_videos(url)

    def seasons_for(self, title: str) -> List[str]:
        return list(self._SEASONS)

    def episodes_for(self, title: str, season: str) -> List[str]:
        if season == "Stream":
            return [title]
        return []

    def stream_link_for(self, title: str, season: str, episode: str) -> Optional[str]:
        video_id = _decode_id(episode) or _decode_id(title)
        if not video_id:
            return None
        return resolve_youtube_url(video_id)

    def resolve_stream_link(self, link: str) -> Optional[str]:
        return link  # already a direct URL

    def metadata_for(self, title: str):
        """Derive the thumbnail from the video ID."""
        video_id = _decode_id(title)
        clean_title = _decode_title(title)
        info: Dict[str, str] = {"title": clean_title}
        art: Dict[str, str] = {}
        if video_id:
            art["thumb"] = f"https://i.ytimg.com/vi/{video_id}/hqdefault.jpg"
            art["poster"] = f"https://i.ytimg.com/vi/{video_id}/maxresdefault.jpg"
        return info, art, None


Plugin = YoutubePlugin
@@ -1,35 +1,98 @@
<?xml version="1.0" encoding="UTF-8"?>
<settings>
    <category label="Quellen">
        <setting id="serienstream_base_url" type="text" label="SerienStream Basis-URL" default="https://s.to" />
        <setting id="serienstream_catalog_search" type="bool" label="SerienStream: Katalog-Suche (mehr Ergebnisse, langsamer)" default="true" />
        <setting id="aniworld_base_url" type="text" label="AniWorld Basis-URL" default="https://aniworld.to" />
        <setting id="topstream_base_url" type="text" label="TopStream Basis-URL" default="https://topstreamfilm.live" />
        <setting id="einschalten_base_url" type="text" label="Einschalten Basis-URL" default="https://einschalten.in" />
        <setting id="filmpalast_base_url" type="text" label="Filmpalast Basis-URL" default="https://filmpalast.to" />
        <setting id="doku_streams_base_url" type="text" label="Doku-Streams Basis-URL" default="https://doku-streams.com" />

    <category label="Wiedergabe">
        <setting id="autoplay_enabled" type="bool" label="Autoplay (bevorzugten Hoster automatisch waehlen)" default="false" />
        <setting id="preferred_hoster" type="enum" label="Bevorzugter Hoster" default="0" values="voe|streamtape|doodstream|vidoza|mixdrop|supervideo|dropload" enable="eq(-1,true)" />
    </category>

    <category label="Globale Suche">
        <setting id="search_plugin_serienstream" type="bool" label="Serienstream durchsuchen" default="true" />
        <setting id="search_plugin_aniworld" type="bool" label="Aniworld durchsuchen" default="true" />
        <setting id="search_plugin_topstreamfilm" type="bool" label="Topstreamfilm durchsuchen" default="true" />
        <setting id="search_plugin_filmpalast" type="bool" label="Filmpalast durchsuchen" default="true" />
        <setting id="search_plugin_moflix" type="bool" label="Moflix durchsuchen" default="true" />
        <setting id="search_plugin_kkiste" type="bool" label="KKiste durchsuchen" default="true" />
        <setting id="search_plugin_hdfilme" type="bool" label="HDFilme durchsuchen" default="true" />
        <setting id="search_plugin_einschalten" type="bool" label="Einschalten durchsuchen" default="true" />
        <setting id="search_plugin_doku_streams" type="bool" label="Doku-Streams durchsuchen" default="true" />
        <setting id="search_plugin_netzkino" type="bool" label="Netzkino durchsuchen" default="true" />
        <setting id="search_plugin_youtube" type="bool" label="YouTube durchsuchen" default="false" />
    </category>

    <category label="Trakt">
        <setting id="trakt_enabled" type="bool" label="Trakt aktivieren" default="false" />
        <setting id="trakt_status" type="text" label="Status" default="Nicht verbunden" enable="false" />
        <setting id="trakt_auth" type="action" label="Trakt autorisieren" action="RunPlugin(plugin://plugin.video.viewit/?action=trakt_auth)" option="close" />
        <setting id="trakt_scrobble" type="bool" label="Scrobbling aktivieren" default="true" enable="eq(-3,true)" />
        <setting id="trakt_auto_watchlist" type="bool" label="Geschaute Serien automatisch zur Watchlist hinzufuegen" default="false" enable="eq(-4,true)" />
        <setting id="trakt_access_token" type="text" label="" default="" visible="false" />
        <setting id="trakt_refresh_token" type="text" label="" default="" visible="false" />
        <setting id="trakt_token_expires" type="text" label="" default="0" visible="false" />
    </category>

    <category label="Metadaten">
        <setting id="serienstream_metadata_source" type="enum" label="SerienStream Metadatenquelle" default="0" values="Automatisch|Quelle|TMDB|Mischen" />
        <setting id="aniworld_metadata_source" type="enum" label="AniWorld Metadatenquelle" default="0" values="Automatisch|Quelle|TMDB|Mischen" />
        <setting id="topstreamfilm_metadata_source" type="enum" label="TopStream Metadatenquelle" default="0" values="Automatisch|Quelle|TMDB|Mischen" />
        <setting id="einschalten_metadata_source" type="enum" label="Einschalten Metadatenquelle" default="0" values="Automatisch|Quelle|TMDB|Mischen" />
        <setting id="filmpalast_metadata_source" type="enum" label="Filmpalast Metadatenquelle" default="0" values="Automatisch|Quelle|TMDB|Mischen" />
        <setting id="doku_streams_metadata_source" type="enum" label="Doku-Streams Metadatenquelle" default="0" values="Automatisch|Quelle|TMDB|Mischen" />
        <setting id="kkiste_metadata_source" type="enum" label="KKiste Metadatenquelle" default="0" values="Automatisch|Quelle|TMDB|Mischen" />
        <setting id="moflix_metadata_source" type="enum" label="Moflix Metadatenquelle" default="0" values="Automatisch|Quelle|TMDB|Mischen" />
        <setting id="tmdb_enabled" type="bool" label="TMDB aktivieren" default="true" />
        <setting id="tmdb_language" type="text" label="TMDB Sprache (z. B. de-DE)" default="de-DE" />
        <setting id="tmdb_show_plot" type="bool" label="TMDB Beschreibung anzeigen" default="true" />
        <setting id="tmdb_show_art" type="bool" label="TMDB Poster und Vorschaubild anzeigen" default="true" />
        <setting id="tmdb_show_fanart" type="bool" label="TMDB Fanart/Backdrop anzeigen" default="true" />
        <setting id="tmdb_show_rating" type="bool" label="TMDB Bewertung anzeigen" default="true" />
        <setting id="tmdb_show_votes" type="bool" label="TMDB Stimmen anzeigen" default="false" />
        <setting id="tmdb_language" type="text" label="TMDB Sprache (z. B. de-DE)" default="de-DE" enable="eq(-1,true)" />
        <setting id="tmdb_show_plot" type="bool" label="Beschreibung anzeigen" default="true" enable="eq(-2,true)" />
        <setting id="tmdb_show_art" type="bool" label="Poster und Vorschaubild anzeigen" default="true" enable="eq(-3,true)" />
        <setting id="tmdb_show_fanart" type="bool" label="Fanart/Backdrop anzeigen" default="true" enable="eq(-4,true)" />
        <setting id="tmdb_show_rating" type="bool" label="Bewertung anzeigen" default="true" enable="eq(-5,true)" />
        <setting id="tmdb_show_votes" type="bool" label="Stimmen anzeigen" default="false" enable="eq(-6,true)" />
        <setting type="lsep" label="Metadatenquellen" />
        <setting id="serienstream_metadata_source" type="enum" label="SerienStream" default="0" values="Automatisch|Quelle|TMDB|Mischen" />
        <setting id="aniworld_metadata_source" type="enum" label="AniWorld" default="0" values="Automatisch|Quelle|TMDB|Mischen" />
        <setting id="topstreamfilm_metadata_source" type="enum" label="TopStream" default="0" values="Automatisch|Quelle|TMDB|Mischen" />
        <setting id="einschalten_metadata_source" type="enum" label="Einschalten" default="0" values="Automatisch|Quelle|TMDB|Mischen" />
        <setting id="filmpalast_metadata_source" type="enum" label="Filmpalast" default="0" values="Automatisch|Quelle|TMDB|Mischen" />
        <setting id="doku_streams_metadata_source" type="enum" label="Doku-Streams" default="0" values="Automatisch|Quelle|TMDB|Mischen" />
        <setting id="kkiste_metadata_source" type="enum" label="KKiste" default="0" values="Automatisch|Quelle|TMDB|Mischen" />
        <setting id="moflix_metadata_source" type="enum" label="Moflix" default="0" values="Automatisch|Quelle|TMDB|Mischen" />
    </category>

    <category label="Updates">
        <setting id="update_installed_version" type="text" label="Installierte Version" default="-" enable="false" />
        <setting id="update_available_selected" type="text" label="Verfuegbar (gewaehlter Kanal)" default="-" enable="false" />
        <setting id="update_channel" type="enum" label="Update-Kanal" default="1" values="Main|Nightly|Custom|Dev" />
        <setting id="apply_update_channel" type="action" label="Update-Kanal jetzt anwenden" action="RunPlugin(plugin://plugin.video.viewit/?action=apply_update_channel)" option="close" />
        <setting id="auto_update_enabled" type="bool" label="Automatische Updates (beim Start pruefen)" default="false" />
        <setting id="auto_update_interval" type="enum" label="Update-Pruefintervall" default="1" values="1 Stunde|6 Stunden|24 Stunden" enable="eq(-1,true)" />
        <setting id="select_update_version" type="action" label="Version waehlen und installieren" action="RunPlugin(plugin://plugin.video.viewit/?action=select_update_version)" option="close" />
        <setting type="lsep" label="ResolveURL" />
        <setting id="install_resolveurl" type="action" label="ResolveURL installieren/reparieren" action="RunPlugin(plugin://plugin.video.viewit/?action=install_resolveurl)" option="close" />
        <setting id="resolveurl_auto_install" type="bool" label="ResolveURL automatisch installieren (beim Start pruefen)" default="true" />
        <setting id="resolveurl_status" type="text" label="ResolveURL Status" default="-" enable="false" />
        <setting type="lsep" label="Kanaldetails" />
        <setting id="update_active_channel" type="text" label="Aktiver Kanal" default="-" enable="false" />
        <setting id="update_active_repo_url" type="text" label="Aktive Repo URL" default="-" enable="false" />
        <setting id="update_repo_url_main" type="text" label="Main URL (addons.xml)" default="https://gitea.it-drui.de/viewit/ViewIT-Kodi-Repo/raw/branch/main/addons.xml" />
        <setting id="update_repo_url_nightly" type="text" label="Nightly URL (addons.xml)" default="https://gitea.it-drui.de/viewit/ViewIT-Kodi-Repo/raw/branch/nightly/addons.xml" />
        <setting id="update_repo_url_dev" type="text" label="Dev URL (addons.xml)" default="https://gitea.it-drui.de/viewit/ViewIT-Kodi-Repo/raw/branch/dev/addons.xml" />
        <setting id="update_repo_url" type="text" label="Custom URL (addons.xml)" default="https://gitea.it-drui.de/viewit/ViewIT-Kodi-Repo/raw/branch/nightly/addons.xml" />
        <setting id="auto_update_last_ts" type="text" label="" default="0" visible="false" />
        <setting id="resolveurl_last_ts" type="text" label="" default="0" visible="false" />
    </category>

    <category label="YouTube">
        <setting id="youtube_quality" type="enum" label="YouTube Videoqualitaet" default="0" values="Beste|1080p|720p|480p|360p" />
        <setting id="install_ytdlp" type="action" label="yt-dlp installieren/reparieren" action="RunPlugin(plugin://plugin.video.viewit/?action=install_ytdlp)" option="close" />
        <setting id="ytdlp_status" type="text" label="yt-dlp Status" default="-" enable="false" />
    </category>

    <category label="Anzeige">
        <setting id="filmpalast_max_page_items" type="number" label="Filmpalast: Max. Eintraege pro Seite" default="15" />
        <setting id="topstreamfilm_max_page_items" type="number" label="TopStream: Max. Eintraege pro Seite" default="15" />
        <setting id="aniworld_max_page_items" type="number" label="AniWorld: Max. Eintraege pro Seite" default="15" />
        <setting id="netzkkino_max_page_items" type="number" label="Netzkino: Max. Eintraege pro Seite" default="15" />
        <setting id="kkiste_max_page_items" type="number" label="KKiste: Max. Eintraege pro Seite" default="15" />
        <setting id="hdfilme_max_page_items" type="number" label="HDFilme: Max. Eintraege pro Seite" default="15" />
        <setting id="moflix_max_page_items" type="number" label="Moflix: Max. Eintraege pro Seite" default="15" />
        <setting id="einschalten_max_page_items" type="number" label="Einschalten: Max. Eintraege pro Seite" default="15" />
    </category>

    <category label="TMDB Erweitert">
        <setting id="tmdb_api_key" type="text" label="TMDB API Key" default="" />
        <setting id="tmdb_api_key" type="text" label="TMDB API Key (optional)" default="" />
        <setting id="tmdb_api_key_active" type="text" label="Aktiver TMDB API Key" default="" enable="false" />
        <setting id="tmdb_prefetch_concurrency" type="number" label="TMDB: gleichzeitige Anfragen (1-20)" default="6" />
        <setting id="tmdb_show_cast" type="bool" label="TMDB Besetzung anzeigen" default="false" />
        <setting id="tmdb_show_episode_cast" type="bool" label="TMDB Besetzung pro Episode anzeigen" default="false" />
@@ -38,44 +101,16 @@
        <setting id="tmdb_log_responses" type="bool" label="TMDB API-Antworten loggen" default="false" />
    </category>

    <category label="Wiedergabe">
        <setting id="autoplay_enabled" type="bool" label="Autoplay (bevorzugten Hoster automatisch waehlen)" default="false" />
        <setting id="preferred_hoster" type="text" label="Bevorzugter Hoster" default="voe" />
    <category label="Quellen">
        <setting id="serienstream_base_url" type="text" label="SerienStream Basis-URL" default="https://s.to" />
        <setting id="aniworld_base_url" type="text" label="AniWorld Basis-URL" default="https://aniworld.to" />
        <setting id="topstream_base_url" type="text" label="TopStream Basis-URL" default="https://topstreamfilm.live" />
        <setting id="einschalten_base_url" type="text" label="Einschalten Basis-URL" default="https://einschalten.in" />
        <setting id="filmpalast_base_url" type="text" label="Filmpalast Basis-URL" default="https://filmpalast.to" />
        <setting id="doku_streams_base_url" type="text" label="Doku-Streams Basis-URL" default="https://doku-streams.com" />
    </category>

    <category label="Updates">
        <setting id="update_channel" type="enum" label="Update-Kanal" default="1" values="Main|Nightly|Custom|Dev" />
        <setting id="apply_update_channel" type="action" label="Update-Kanal jetzt anwenden" action="RunPlugin(plugin://plugin.video.viewit/?action=apply_update_channel)" option="close" />
        <setting id="auto_update_enabled" type="bool" label="Automatische Updates (beim Start pruefen)" default="false" />
        <setting id="auto_update_interval" type="enum" label="Update-Pruefintervall" default="1" values="1 Stunde|6 Stunden|24 Stunden" />
        <setting id="select_update_version" type="action" label="Version waehlen und installieren" action="RunPlugin(plugin://plugin.video.viewit/?action=select_update_version)" option="close" />
        <setting id="install_resolveurl" type="action" label="ResolveURL installieren/reparieren" action="RunPlugin(plugin://plugin.video.viewit/?action=install_resolveurl)" option="close" />
        <setting id="resolveurl_auto_install" type="bool" label="ResolveURL automatisch installieren (beim Start pruefen)" default="true" />
        <setting id="update_installed_version" type="text" label="Installierte Version" default="-" enable="false" />
        <setting id="update_available_selected" type="text" label="Verfuegbar (gewaehlter Kanal)" default="-" enable="false" />
        <setting id="resolveurl_status" type="text" label="ResolveURL Status" default="-" enable="false" />
        <setting id="update_active_channel" type="text" label="Aktiver Kanal" default="-" enable="false" />
        <setting id="update_active_repo_url" type="text" label="Aktive Repo URL" default="-" enable="false" />
        <setting id="update_repo_url_main" type="text" label="Main URL (addons.xml)" default="https://gitea.it-drui.de/viewit/ViewIT-Kodi-Repo/raw/branch/main/addons.xml" />
        <setting id="update_repo_url_nightly" type="text" label="Nightly URL (addons.xml)" default="https://gitea.it-drui.de/viewit/ViewIT-Kodi-Repo/raw/branch/nightly/addons.xml" />
        <setting id="update_repo_url_dev" type="text" label="Dev URL (addons.xml)" default="https://gitea.it-drui.de/viewit/ViewIT-Kodi-Repo/raw/branch/dev/addons.xml" />
        <setting id="update_repo_url" type="text" label="Custom URL (addons.xml)" default="https://gitea.it-drui.de/viewit/ViewIT-Kodi-Repo/raw/branch/nightly/addons.xml" />
        <setting id="auto_update_last_ts" type="text" label="Auto-Update letzte Pruefung (intern)" default="0" visible="false" />
        <setting id="resolveurl_last_ts" type="text" label="ResolveURL letzte Pruefung (intern)" default="0" visible="false" />
    </category>

    <category label="Trakt">
        <setting id="trakt_enabled" type="bool" label="Trakt aktivieren" default="false" />
        <setting id="trakt_client_id" type="text" label="Trakt Client ID" default="" />
        <setting id="trakt_client_secret" type="text" label="Trakt Client Secret" default="" />
        <setting id="trakt_auth" type="action" label="Trakt autorisieren" action="RunPlugin(plugin://plugin.video.viewit/?action=trakt_auth)" option="close" />
        <setting id="trakt_scrobble" type="bool" label="Scrobbling aktivieren" default="true" />
        <setting id="trakt_access_token" type="text" label="" default="" visible="false" />
        <setting id="trakt_refresh_token" type="text" label="" default="" visible="false" />
        <setting id="trakt_token_expires" type="text" label="" default="0" visible="false" />
    </category>

    <category label="Debug Global">
    <category label="Debug">
        <setting id="debug_log_urls" type="bool" label="URLs mitschreiben (global)" default="false" />
        <setting id="debug_dump_html" type="bool" label="HTML speichern (global)" default="false" />
        <setting id="debug_show_url_info" type="bool" label="Aktuelle URL anzeigen (global)" default="false" />
@@ -83,32 +118,31 @@
        <setting id="log_max_mb" type="number" label="URL-Log: maximale Dateigroesse (MB)" default="5" />
        <setting id="log_max_files" type="number" label="URL-Log: Anzahl alter Dateien" default="3" />
        <setting id="dump_max_files" type="number" label="HTML: maximale Dateien pro Plugin" default="200" />
    </category>

    <category label="Debug Quellen">
        <setting type="lsep" label="Pro Quelle" />
        <setting id="log_urls_serienstream" type="bool" label="SerienStream: URLs mitschreiben" default="false" />
        <setting id="dump_html_serienstream" type="bool" label="SerienStream: HTML speichern" default="false" />
        <setting id="show_url_info_serienstream" type="bool" label="SerienStream: Aktuelle URL anzeigen" default="false" />
        <setting id="log_errors_serienstream" type="bool" label="SerienStream: Fehler mitschreiben" default="false" />

        <setting id="log_urls_aniworld" type="bool" label="AniWorld: URLs mitschreiben" default="false" />
        <setting id="dump_html_aniworld" type="bool" label="AniWorld: HTML speichern" default="false" />
        <setting id="show_url_info_aniworld" type="bool" label="AniWorld: Aktuelle URL anzeigen" default="false" />
        <setting id="log_errors_aniworld" type="bool" label="AniWorld: Fehler mitschreiben" default="false" />

        <setting id="log_urls_topstreamfilm" type="bool" label="TopStream: URLs mitschreiben" default="false" />
        <setting id="dump_html_topstreamfilm" type="bool" label="TopStream: HTML speichern" default="false" />
        <setting id="show_url_info_topstreamfilm" type="bool" label="TopStream: Aktuelle URL anzeigen" default="false" />
        <setting id="log_errors_topstreamfilm" type="bool" label="TopStream: Fehler mitschreiben" default="false" />

        <setting id="log_urls_einschalten" type="bool" label="Einschalten: URLs mitschreiben" default="false" />
        <setting id="dump_html_einschalten" type="bool" label="Einschalten: HTML speichern" default="false" />
        <setting id="show_url_info_einschalten" type="bool" label="Einschalten: Aktuelle URL anzeigen" default="false" />
        <setting id="log_errors_einschalten" type="bool" label="Einschalten: Fehler mitschreiben" default="false" />

        <setting id="log_urls_filmpalast" type="bool" label="Filmpalast: URLs mitschreiben" default="false" />
        <setting id="dump_html_filmpalast" type="bool" label="Filmpalast: HTML speichern" default="false" />
        <setting id="show_url_info_filmpalast" type="bool" label="Filmpalast: Aktuelle URL anzeigen" default="false" />
        <setting id="log_errors_filmpalast" type="bool" label="Filmpalast: Fehler mitschreiben" default="false" />
    </category>

    <category label="Intern">
        <setting id="changelog_last_shown_version" type="text" label="" default="" visible="false" />
    </category>

</settings>
185
addon/ytdlp_helper.py
Normal file
@@ -0,0 +1,185 @@
"""Shared yt-dlp helper functions for YouTube playback.

Used by youtube_plugin and dokustreams_plugin.
"""

from __future__ import annotations

import re
from typing import Any, Dict, Optional

try:
    import xbmc  # type: ignore

    def _log(msg: str) -> None:
        xbmc.log(f"[ViewIt][yt-dlp] {msg}", xbmc.LOGWARNING)
except ImportError:
    def _log(msg: str) -> None:
        pass


_YT_ID_RE = re.compile(
    r"(?:youtube(?:-nocookie)?\.com/(?:embed/|v/|watch\?.*?v=)|youtu\.be/)"
    r"([A-Za-z0-9_-]{11})"
)
def extract_youtube_id(url: str) -> Optional[str]:
    """Extract a YouTube video ID from various URL formats."""
    if not url:
        return None
    m = _YT_ID_RE.search(url)
    return m.group(1) if m else None
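The regex covers watch, embed, short-link, and nocookie URL shapes. A standalone check of the extraction:

```python
import re
from typing import Optional

_YT_ID_RE = re.compile(
    r"(?:youtube(?:-nocookie)?\.com/(?:embed/|v/|watch\?.*?v=)|youtu\.be/)"
    r"([A-Za-z0-9_-]{11})"
)

def extract_youtube_id(url: str) -> Optional[str]:
    if not url:
        return None
    m = _YT_ID_RE.search(url)
    return m.group(1) if m else None

ids = [
    extract_youtube_id("https://www.youtube.com/watch?v=dQw4w9WgXcQ"),
    extract_youtube_id("https://youtu.be/dQw4w9WgXcQ"),
    extract_youtube_id("https://www.youtube-nocookie.com/embed/dQw4w9WgXcQ"),
    extract_youtube_id("https://example.com/not-a-video"),
]
```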
def _fix_strptime() -> None:
    """Kodi workaround: avoid the datetime.strptime race condition.

    Kodi's embedded Python can fail the lazy _strptime import in
    multi-threaded environments. Importing the module directly ensures
    it is already loaded by the time yt-dlp is invoked.
    """
    try:
        import _strptime  # noqa: F401 - forces the internal import
    except Exception:
        pass
def ensure_ytdlp_in_path() -> bool:
    """Add script.module.yt-dlp/lib to sys.path if necessary."""
    _fix_strptime()
    try:
        import yt_dlp  # type: ignore  # noqa: F401
        return True
    except ImportError:
        pass
    try:
        import os
        import sys

        import xbmcvfs  # type: ignore

        lib_path = xbmcvfs.translatePath("special://home/addons/script.module.yt-dlp/lib")
        if lib_path and os.path.isdir(lib_path) and lib_path not in sys.path:
            sys.path.insert(0, lib_path)
        import yt_dlp  # type: ignore  # noqa: F401
        return True
    except Exception:
        pass
    return False
def get_quality_format() -> str:
    """Read the YouTube quality from the addon settings."""
    _QUALITY_MAP = {
        "0": "bestvideo[ext=mp4][vcodec^=avc1]+bestaudio[ext=m4a]/bestvideo[ext=mp4]+bestaudio[ext=m4a]/best[ext=mp4]/best",
        "1": "bestvideo[height<=1080][ext=mp4][vcodec^=avc1]+bestaudio[ext=m4a]/bestvideo[height<=1080][ext=mp4]+bestaudio[ext=m4a]/best[height<=1080][ext=mp4]/best",
        "2": "bestvideo[height<=720][ext=mp4][vcodec^=avc1]+bestaudio[ext=m4a]/bestvideo[height<=720][ext=mp4]+bestaudio[ext=m4a]/best[height<=720][ext=mp4]/best",
        "3": "bestvideo[height<=480][ext=mp4][vcodec^=avc1]+bestaudio[ext=m4a]/bestvideo[height<=480][ext=mp4]+bestaudio[ext=m4a]/best[height<=480][ext=mp4]/best",
        "4": "bestvideo[height<=360][ext=mp4][vcodec^=avc1]+bestaudio[ext=m4a]/bestvideo[height<=360][ext=mp4]+bestaudio[ext=m4a]/best[height<=360][ext=mp4]/best",
    }
    try:
        import xbmcaddon  # type: ignore
        val = xbmcaddon.Addon().getSetting("youtube_quality") or "0"
        return _QUALITY_MAP.get(val, _QUALITY_MAP["0"])
    except Exception:
        return _QUALITY_MAP["0"]
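The enum index Kodi stores for `youtube_quality` maps directly to a yt-dlp format selector, with index "0" (best) as the fallback for unknown or empty values. A trimmed sketch of that lookup pattern (the selectors here are abridged placeholders; the full strings live in `get_quality_format()`):

```python
# Abridged quality map for illustration only
QUALITY_MAP = {
    "0": "best",
    "2": "best[height<=720]/best",
}

def pick_format(setting_value: str) -> str:
    # Unknown or empty enum indices fall back to "best"
    return QUALITY_MAP.get(setting_value or "0", QUALITY_MAP["0"])

fmt_720 = pick_format("2")
fmt_unknown = pick_format("9")
```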
_AUDIO_SEP = "||AUDIO||"
_META_SEP = "||META||"
def resolve_youtube_url(video_id: str) -> Optional[str]:
    """Resolve a YouTube video ID to a direct stream URL via yt-dlp.

    For separate video+audio streams the return string is encoded as
    ``video_url||AUDIO||audio_url||META||key=val,key=val,...``.
    The caller can split all parts with ``split_video_audio()``.
    """
    if not ensure_ytdlp_in_path():
        _log("yt-dlp not available (script.module.yt-dlp missing)")
        try:
            import xbmcgui  # type: ignore
            xbmcgui.Dialog().notification(
                "yt-dlp fehlt",
                "Bitte yt-dlp in den ViewIT-Einstellungen installieren.",
                xbmcgui.NOTIFICATION_ERROR,
                5000,
            )
        except Exception:
            pass
        return None
    try:
        from yt_dlp import YoutubeDL  # type: ignore
    except ImportError:
        return None
    url = f"https://www.youtube.com/watch?v={video_id}"
    fmt = get_quality_format()
    ydl_opts: Dict[str, Any] = {
        "format": fmt,
        "quiet": True,
        "no_warnings": True,
        "extract_flat": False,
    }
    try:
        with YoutubeDL(ydl_opts) as ydl:
            info = ydl.extract_info(url, download=False)
            if not info:
                return None
            duration = int(info.get("duration") or 0)
            # Single URL (combined stream)
            direct = info.get("url")
            if direct:
                return direct
            # Separate video+audio streams (higher quality)
            rf = info.get("requested_formats")
            if rf and len(rf) >= 2:
                vf, af = rf[0], rf[1]
                video_url = vf.get("url")
                audio_url = af.get("url")
                if video_url and audio_url:
                    vcodec = vf.get("vcodec") or "avc1.640028"
                    acodec = af.get("acodec") or "mp4a.40.2"
                    w = int(vf.get("width") or 1920)
                    h = int(vf.get("height") or 1080)
                    fps = int(vf.get("fps") or 25)
                    vbr = int((vf.get("tbr") or 5000) * 1000)
                    abr = int((af.get("tbr") or 128) * 1000)
                    asr = int(af.get("asr") or 44100)
                    ach = int(af.get("audio_channels") or 2)
                    meta = (
                        f"vc={vcodec},ac={acodec},"
                        f"w={w},h={h},fps={fps},"
                        f"vbr={vbr},abr={abr},"
                        f"asr={asr},ach={ach},dur={duration}"
                    )
                    _log(f"Separate streams: {h}p {vcodec} + {acodec}")
                    return f"{video_url}{_AUDIO_SEP}{audio_url}{_META_SEP}{meta}"
                if video_url:
                    return video_url
            # Fallback: last format
            formats = info.get("formats", [])
            if formats:
                return formats[-1].get("url")
    except Exception as exc:
        _log(f"yt-dlp error for {video_id}: {exc}")
    return None
def split_video_audio(url: str) -> tuple:
    """Split a URL into (video_url, audio_url, meta_dict).

    If there is no audio part: (url, None, {}).
    meta_dict contains the keys: vc, ac, w, h, fps, vbr, abr, asr, ach, dur
    """
    if _AUDIO_SEP not in url:
        return url, None, {}
    parts = url.split(_AUDIO_SEP, 1)
    video_url = parts[0]
    rest = parts[1]
    meta: Dict[str, str] = {}
    audio_url = rest
    if _META_SEP in rest:
        audio_url, meta_str = rest.split(_META_SEP, 1)
        for pair in meta_str.split(","):
            if "=" in pair:
                k, v = pair.split("=", 1)
                meta[k] = v
    return video_url, audio_url, meta
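A round-trip check of the `||AUDIO||`/`||META||` encoding produced by `resolve_youtube_url` (the URLs below are placeholders):

```python
from typing import Dict, Optional, Tuple

_AUDIO_SEP = "||AUDIO||"
_META_SEP = "||META||"

def split_video_audio(url: str) -> Tuple[str, Optional[str], Dict[str, str]]:
    # No audio separator means the URL is a plain combined stream
    if _AUDIO_SEP not in url:
        return url, None, {}
    video_url, rest = url.split(_AUDIO_SEP, 1)
    meta: Dict[str, str] = {}
    audio_url = rest
    if _META_SEP in rest:
        audio_url, meta_str = rest.split(_META_SEP, 1)
        for pair in meta_str.split(","):
            if "=" in pair:
                k, v = pair.split("=", 1)
                meta[k] = v
    return video_url, audio_url, meta

combined = (
    "https://cdn.example/video.mp4"
    "||AUDIO||https://cdn.example/audio.m4a"
    "||META||vc=avc1.640028,ac=mp4a.40.2,h=1080,dur=213"
)
video, audio, meta = split_video_audio(combined)
plain, no_audio, empty = split_video_audio("https://cdn.example/combined.mp4")
```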
113
docs/TRAKT.md
Normal file
@@ -0,0 +1,113 @@
Trakt in ViewIT: User Guide


What is Trakt?

Trakt (https://trakt.tv) is a free service that tracks which shows and movies you watch. With it you can:
- See where you left off in a show
- Keep an eye on new episodes of your shows
- Sync your complete watch history across devices


Setup

1) Create a Trakt account

If you do not have an account yet, register for free at https://trakt.tv/auth/join

2) Enable Trakt in ViewIT

- Open ViewIT in Kodi
- Go to Settings (gear icon or context menu)
- Switch to the "Trakt" category
- Set "Trakt aktivieren" to On

3) Authorize Trakt

- Click "Trakt autorisieren"
- ViewIT shows you a code and a URL
- Open https://trakt.tv/activate in a browser (phone or PC)
- Sign in and enter the displayed code
- Confirm the authorization
- ViewIT detects the approval automatically, and you are done

The authorization is stored permanently; you only need to do this once.
Einstellungen
|
||||
|
||||
- Trakt aktivieren: Schaltet alle Trakt-Funktionen ein oder aus
|
||||
- Trakt autorisieren: Verbindet ViewIT mit deinem Trakt-Konto
|
||||
- Scrobbling aktivieren: Sendet automatisch an Trakt, was du gerade schaust
|
||||
- Geschaute Serien automatisch zur Watchlist hinzufuegen: Fuegt Serien/Filme beim Schauen automatisch zu deiner Trakt-Watchlist hinzu, damit sie bei "Upcoming" erscheinen
|
||||
|
||||
|
||||
Menues im Hauptmenue
|
||||
|
||||
Wenn Trakt aktiviert ist, erscheint im ViewIT-Hauptmenue ein Untermenüpunkt "Trakt" (nach allen Quellen-Plugins).
|
||||
|
||||
Ein Klick darauf oeffnet das Trakt-Untermenue mit folgenden Eintraegen (nur wenn bereits autorisiert):
|
||||
|
||||
|
||||
Weiterschauen
|
||||
|
||||
Zeigt Serien, bei denen du mittendrin aufgehoert hast. Praktisch um schnell dort weiterzumachen, wo du zuletzt warst.
|
||||
|
||||
|
||||
Trakt Upcoming
|
||||
|
||||
Zeigt neue Episoden der naechsten 14 Tage fuer alle Serien in deiner Trakt-Watchlist. Die Ansicht ist nach Datum gruppiert:
|
||||
|
||||
- Heute – Episoden, die heute erscheinen
|
||||
- Morgen – Episoden von morgen
|
||||
- Wochentag – z.B. "Mittwoch", "Donnerstag"
|
||||
- Wochentag + Datum – ab naechster Woche, z.B. "Montag 24.03."
|
||||
|
||||
Jeder Eintrag zeigt Serienname, Staffel/Episode und Episodentitel, z.B.:
|
||||
Game of Thrones – S02E05: The Wolf and the Lion
|
||||
|
||||
Damit eine Serie hier erscheint, muss sie in deiner Trakt-Watchlist sein. Du kannst Serien auf drei Wegen hinzufuegen:
|
||||
- Direkt auf trakt.tv
|
||||
- Ueber das Kontextmenue in der Trakt History (siehe unten)
|
||||
- Automatisch beim Schauen (Einstellung "Geschaute Serien automatisch zur Watchlist hinzufuegen")
|
||||
|
||||
|
||||
Trakt Watchlist
|
||||
|
||||
Zeigt alle Titel in deiner Trakt-Watchlist, unterteilt in Filme und Serien.
|
||||
Ein Klick auf einen Eintrag fuehrt zur Staffel-/Episodenauswahl in ViewIT.
|
||||
|
||||
|
||||
Trakt History
|
||||
|
||||
Zeigt deine zuletzt geschauten Episoden und Filme (seitenweise, neueste zuerst). Jeder Eintrag zeigt Serienname mit Staffel, Episode, Episodentitel und Poster.
|
||||
|
||||
Kontextmenue (lange druecken oder Taste "C"):
|
||||
- "Zur Trakt-Watchlist hinzufuegen" – Fuegt die Serie/den Film zu deiner Watchlist hinzu, damit kuenftige Episoden bei "Upcoming" erscheinen
|
||||
|
||||
|
||||
Scrobbling
|
||||
|
||||
Scrobbling bedeutet, dass ViewIT automatisch an Trakt meldet was du schaust:
|
||||
|
||||
- Du startest eine Episode oder einen Film in ViewIT
|
||||
- ViewIT sendet "Start" an Trakt (die Episode erscheint als "Watching" in deinem Profil)
|
||||
- Wenn die Wiedergabe endet, sendet ViewIT "Stop" mit dem Fortschritt
|
||||
- Hat der Fortschritt mindestens 80% erreicht, markiert Trakt die Episode als gesehen
|
||||
|
||||
Das passiert vollautomatisch im Hintergrund – du musst nichts tun.
|
||||
|
||||
|
||||
Haeufige Fragen
|
||||
|
||||
Warum erscheint eine Serie nicht bei "Upcoming"?
|
||||
Die Serie muss in deiner Trakt-Watchlist sein. Fuege sie ueber die Trakt History (Kontextmenue) oder direkt auf trakt.tv hinzu.
|
||||
|
||||
Warum wird eine Episode nicht als gesehen markiert?
|
||||
Trakt markiert Episoden erst als gesehen, wenn mindestens ca. 80% geschaut wurden. Wenn du vorher abbrichst, wird sie nicht als gesehen gezaehlt.
|
||||
|
||||
Kann ich Trakt auf mehreren Geraeten nutzen?
|
||||
Ja. Autorisiere ViewIT auf jedem Geraet und alle teilen denselben Schauverlauf ueber dein Trakt-Konto.
|
||||
|
||||
Muss ich online sein?
|
||||
Ja, Trakt benoetigt eine Internetverbindung. Ohne Verbindung funktioniert die Wiedergabe weiterhin, aber Scrobbling und Trakt-Menues sind nicht verfuegbar.
|
||||
@@ -34,6 +34,10 @@ PY
ZIP_NAME="${ADDON_ID}-${ADDON_VERSION}.zip"
ZIP_PATH="${INSTALL_DIR}/${ZIP_NAME}"

CHANGELOG_USER="${SRC_ADDON_DIR}/CHANGELOG-USER.md"
python3 "${ROOT_DIR}/scripts/update_user_changelog.py" --fill "${ADDON_XML}" "${CHANGELOG_USER}" >&2
python3 "${ROOT_DIR}/scripts/update_user_changelog.py" --check "${ADDON_XML}" "${CHANGELOG_USER}" >&2 || exit 1

ADDON_DIR="$("${ROOT_DIR}/scripts/build_install_addon.sh" >/dev/null; echo "${INSTALL_DIR}/${ADDON_ID}")"

rm -f "${ZIP_PATH}"
@@ -15,19 +15,3 @@ msg=$(cat "$1")
updated_msg=$(echo "$msg" | sed -E "s/bump to [0-9]+\.[0-9]+\.[0-9]+(\.[0-9]+)?[^ ]*/bump to ${version}/g")
echo "$updated_msg" > "$1"

today=$(date +%Y-%m-%d)

# Build the changelog entry
{
    echo "## ${version} - ${today}"
    echo ""
    while IFS= read -r line; do
        [[ -z "$line" ]] && continue
        echo "- ${line}"
    done <<< "$updated_msg"
    echo ""
    cat CHANGELOG-DEV.md
} > /tmp/changelog_new.md

mv /tmp/changelog_new.md CHANGELOG-DEV.md
git add CHANGELOG-DEV.md
43 scripts/hooks/prepare-commit-msg Executable file
@@ -0,0 +1,43 @@
#!/bin/bash
# prepare-commit-msg: write a changelog entry to CHANGELOG-DEV.md (dev branch only)
# Runs after pre-commit (version already bumped) and before commit-msg.
# git add works reliably here for the current commit.

branch=$(git symbolic-ref --short HEAD 2>/dev/null)
[[ "$branch" != "dev" ]] && exit 0

root=$(git rev-parse --show-toplevel)
cd "$root"

# Only for normal commits (not amend, merge, squash)
commit_type="${2:-}"
[[ "$commit_type" == "merge" || "$commit_type" == "squash" ]] && exit 0

# Current version from addon.xml (already bumped by the pre-commit hook)
version=$(grep -oP 'version="\K[0-9]+\.[0-9]+\.[0-9]+(\.[0-9]+)?[^"]*' addon/addon.xml | head -1)
[[ -z "$version" ]] && exit 0

# Read the commit message from the file (entered by the user or passed via -m)
msg=$(cat "$1")
# Strip comment lines
msg=$(echo "$msg" | grep -v '^#' | sed '/^[[:space:]]*$/d' | head -1)
[[ -z "$msg" ]] && exit 0

today=$(date +%Y-%m-%d)

# Check whether an entry for this version already exists (avoid duplicates)
if grep -q "^## ${version} " CHANGELOG-DEV.md 2>/dev/null; then
    exit 0
fi

# Build the changelog entry and prepend it
{
    echo "## ${version} - ${today}"
    echo ""
    echo "- ${msg}"
    echo ""
    cat CHANGELOG-DEV.md
} > /tmp/changelog_new.md

mv /tmp/changelog_new.md CHANGELOG-DEV.md
git add CHANGELOG-DEV.md
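The hook's core prepend-plus-dedup step can be exercised in isolation. This sketch uses a throwaway file under /tmp with a hardcoded version and message (both hypothetical, for illustration only):

```shell
#!/bin/sh
# Demo of the hook's core logic: prepend a "## <version>" section to a
# changelog unless a section for that version already exists.
changelog="/tmp/changelog-demo.md"
version="0.1.87.0-dev"
msg="Example entry"
today=$(date +%Y-%m-%d)

# Seed a changelog that already has one older section.
printf '## 0.1.86.5-dev - 2026-04-03\n\n- older entry\n' > "$changelog"

if ! grep -q "^## ${version} " "$changelog"; then
    {
        echo "## ${version} - ${today}"
        echo ""
        echo "- ${msg}"
        echo ""
        cat "$changelog"
    } > "${changelog}.new"
    mv "${changelog}.new" "$changelog"
fi

head -n 1 "$changelog"
```

Running it twice leaves the file unchanged on the second pass, which is exactly why the hook is safe to re-run on an amended commit.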
238 scripts/update_user_changelog.py Normal file
@@ -0,0 +1,238 @@
#!/usr/bin/env python3
"""
Maintains CHANGELOG-USER.md before the ZIP build.

Modes:
--fill   Reads the commits since the last tag, generates readable text, and
         replaces the placeholder in the section for the current version.
         Creates the section if it does not exist yet.
--check  Checks whether a filled section for VERSION exists.
         Exit 0 = OK, exit 1 = missing or still a placeholder.

Invoked by build_kodi_zip.sh:
    python3 scripts/update_user_changelog.py --fill addon/addon.xml addon/CHANGELOG-USER.md
    python3 scripts/update_user_changelog.py --check addon/addon.xml addon/CHANGELOG-USER.md
"""

import re
import subprocess
import sys
import xml.etree.ElementTree as ET
from pathlib import Path

PLACEHOLDER_LINE = "(Bitte Changelog-Einträge hier einfügen)"

# Keywords from commit messages -> readable category label.
# Order determines priority (first match wins).
CATEGORY_RULES: list[tuple[re.Pattern, str]] = [
    (re.compile(r"trakt", re.I), "Trakt"),
    (re.compile(r"serienstream|serien.?stream", re.I), "SerienStream"),
    (re.compile(r"aniworld", re.I), "AniWorld"),
    (re.compile(r"youtube|yt.?dlp", re.I), "YouTube"),
    (re.compile(r"tmdb|metadat", re.I), "Metadaten"),
    (re.compile(r"suche|search", re.I), "Suche"),
    (re.compile(r"setting|einstellung", re.I), "Einstellungen"),
    (re.compile(r"update|version", re.I), "Updates"),
    (re.compile(r"moflix", re.I), "Moflix"),
    (re.compile(r"filmpalast", re.I), "Filmpalast"),
    (re.compile(r"kkiste", re.I), "KKiste"),
    (re.compile(r"doku.?stream", re.I), "Doku-Streams"),
    (re.compile(r"topstream", re.I), "Topstreamfilm"),
    (re.compile(r"einschalten", re.I), "Einschalten"),
    (re.compile(r"hdfilme", re.I), "HDFilme"),
    (re.compile(r"netzkino", re.I), "Netzkino"),
]


def get_version(addon_xml: Path) -> str:
    root = ET.parse(addon_xml).getroot()
    return root.attrib.get("version", "").strip()


def strip_suffix(version: str) -> str:
    return re.sub(r"-(dev|nightly|beta|alpha).*$", "", version)


def get_prev_tag(current_tag: str) -> str | None:
    try:
        result = subprocess.run(
            ["git", "tag", "--sort=-version:refname"],
            capture_output=True, text=True, check=True,
        )
        tags = [t.strip() for t in result.stdout.splitlines() if t.strip()]
        if current_tag in tags:
            idx = tags.index(current_tag)
            return tags[idx + 1] if idx + 1 < len(tags) else None
        return tags[0] if tags else None
    except Exception:
        return None


def get_commits_since(prev_tag: str | None) -> list[str]:
    """Returns commit subjects since prev_tag (excluding the tag commit itself)."""
    try:
        ref = f"{prev_tag}..HEAD" if prev_tag else "HEAD"
        result = subprocess.run(
            ["git", "log", ref, "--pretty=format:%s"],
            capture_output=True, text=True, check=True,
        )
        return [l.strip() for l in result.stdout.splitlines() if l.strip()]
    except Exception:
        return []


def extract_description(subject: str) -> str:
    """Extracts the human-readable part of a commit message."""
    # "dev: bump to 0.1.86.0-dev description here" -> "description here"
    m = re.match(r"^(?:dev|nightly|main):\s*bump\s+to\s+[\d.]+(?:-\w+)?\s*(.+)$", subject, re.I)
    if m:
        return m.group(1).strip()
    # "dev: description" -> "description"
    m = re.match(r"^(?:dev|nightly|main):\s*(.+)$", subject, re.I)
    if m:
        return m.group(1).strip()
    return subject


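The two patterns in extract_description can be checked against commit subjects of the shape seen in this repository's history. A standalone sketch, with the regexes copied from the function above:

```python
import re


def extract_description(subject: str) -> str:
    # Pattern 1: "<branch>: bump to <version> <description>" -> description
    m = re.match(r"^(?:dev|nightly|main):\s*bump\s+to\s+[\d.]+(?:-\w+)?\s*(.+)$", subject, re.I)
    if m:
        return m.group(1).strip()
    # Pattern 2: "<branch>: <description>" -> description
    m = re.match(r"^(?:dev|nightly|main):\s*(.+)$", subject, re.I)
    if m:
        return m.group(1).strip()
    # No prefix recognized: return the subject unchanged
    return subject


print(extract_description("dev: bump to 0.1.86.0-dev Globale Suche konfigurierbar"))
print(extract_description("dev: Trakt Performance"))
print(extract_description("plain subject"))
```

Note the first pattern's `(?:-\w+)?` swallows a `-dev`/`-nightly` suffix, so the version bump disappears entirely and only the trailing description survives.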
def split_description(desc: str) -> list[str]:
    """Splits comma-separated descriptions into individual items."""
    parts = [p.strip() for p in re.split(r",\s*(?=[A-ZÄÖÜ\w])", desc) if p.strip()]
    return parts if parts else [desc]


def categorize(items: list[str]) -> dict[str, list[str]]:
    """Groups description items by category."""
    categories: dict[str, list[str]] = {}
    for item in items:
        cat = "Allgemein"
        for pattern, label in CATEGORY_RULES:
            if pattern.search(item):
                cat = label
                break
        categories.setdefault(cat, []).append(item)
    return categories


def build_changelog_text(commits: list[str]) -> str:
    """Builds readable changelog text from commit subjects."""
    all_items: list[str] = []
    for subject in commits:
        desc = extract_description(subject)
        if desc:
            all_items.extend(split_description(desc))

    if not all_items:
        return "- Verschiedene Verbesserungen und Fehlerbehebungen"

    categories = categorize(all_items)
    lines: list[str] = []
    for cat, items in categories.items():
        lines.append(f"**{cat}**")
        for item in items:
            # Capitalize the first letter and ensure a trailing period
            item = item[0].upper() + item[1:] if item else item
            if not item.endswith((".", "!", "?")):
                item += "."
            lines.append(f"- {item}")
        lines.append("")
    return "\n".join(lines).rstrip()


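split_description and categorize compose as follows. A standalone sketch using a reduced three-rule subset of CATEGORY_RULES and a sample commit description modeled on the ones in this changelog:

```python
import re

# Three-rule subset of the script's CATEGORY_RULES (order = priority).
CATEGORY_RULES = [
    (re.compile(r"trakt", re.I), "Trakt"),
    (re.compile(r"suche|search", re.I), "Suche"),
    (re.compile(r"setting|einstellung", re.I), "Einstellungen"),
]


def split_description(desc):
    # Split on commas that are followed by a word character.
    parts = [p.strip() for p in re.split(r",\s*(?=[A-ZÄÖÜ\w])", desc) if p.strip()]
    return parts if parts else [desc]


def categorize(items):
    categories = {}
    for item in items:
        cat = "Allgemein"  # fallback bucket when no rule matches
        for pattern, label in CATEGORY_RULES:
            if pattern.search(item):
                cat = label
                break
        categories.setdefault(cat, []).append(item)
    return categories


items = split_description(
    "Trakt Performance, Settings-Menü benutzerfreundlicher, Globale Suche konfigurierbar"
)
print(items)
print(categorize(items))
```

One comma-separated commit description thus fans out into one bullet per category section, which is how a single `dev: bump to …` commit can populate several headings in CHANGELOG-USER.md.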
def section_exists(lines: list[str], header: str) -> bool:
    return any(line.strip() == header for line in lines)


def section_is_placeholder(lines: list[str], header: str) -> bool:
    in_section = False
    content_lines = []
    for line in lines:
        if line.strip() == header:
            in_section = True
            continue
        if in_section:
            if line.startswith("## "):
                break
            stripped = line.strip()
            if stripped:
                content_lines.append(stripped)
    return not content_lines or all(l == PLACEHOLDER_LINE for l in content_lines)


def replace_section_content(lines: list[str], header: str, new_content: str) -> list[str]:
    """Replaces the content of an existing section."""
    result: list[str] = []
    in_section = False
    for line in lines:
        if line.strip() == header:
            result.append(line)
            result.append("")
            result.extend(new_content.splitlines())
            result.append("")
            in_section = True
            continue
        if in_section:
            if line.startswith("## "):
                in_section = False
                result.append(line)
            # Skip the old lines of the section
            continue
        result.append(line)
    return result


def insert_section(lines: list[str], header: str, content: str) -> list[str]:
    new_section = [header, "", *content.splitlines(), ""]
    return new_section + ([""] if lines and lines[0].strip() else []) + lines


def main() -> None:
    if len(sys.argv) < 4 or sys.argv[1] not in ("--fill", "--check"):
        print(f"Usage: {sys.argv[0]} --fill|--check <addon.xml> <CHANGELOG-USER.md>", file=sys.stderr)
        sys.exit(2)

    mode = sys.argv[1]
    addon_xml = Path(sys.argv[2])
    changelog = Path(sys.argv[3])

    version = get_version(addon_xml)
    if not version:
        print("Error: could not read the version from addon.xml.", file=sys.stderr)
        sys.exit(2)

    clean_version = strip_suffix(version)
    header = f"## {clean_version}"
    lines = changelog.read_text(encoding="utf-8").splitlines() if changelog.exists() else []

    if mode == "--check":
        if not section_exists(lines, header):
            print(f"ERROR: no changelog section for {clean_version} in {changelog}", file=sys.stderr)
            sys.exit(1)
        if section_is_placeholder(lines, header):
            print(f"ERROR: the changelog section for {clean_version} is still a placeholder.", file=sys.stderr)
            sys.exit(1)
        print(f"OK: changelog section for {clean_version} present.")
        sys.exit(0)

    # --fill
    current_tag = f"v{version}"
    prev_tag = get_prev_tag(current_tag)
    commits = get_commits_since(prev_tag)
    content = build_changelog_text(commits)

    if section_exists(lines, header):
        if not section_is_placeholder(lines, header):
            print(f"Section {header} already filled – no change.")
            sys.exit(0)
        new_lines = replace_section_content(lines, header, content)
    else:
        new_lines = insert_section(lines, header, content)

    changelog.write_text("\n".join(new_lines) + "\n", encoding="utf-8")
    print(f"Changelog for {header} written.")
    sys.exit(0)


if __name__ == "__main__":
    main()